CN110445954B - Image acquisition method and device and electronic equipment - Google Patents


Info

Publication number
CN110445954B
CN110445954B (application CN201910680491.1A)
Authority
CN
China
Prior art keywords
image
time point
target
timing
image frame
Prior art date
Legal status
Active
Application number
CN201910680491.1A
Other languages
Chinese (zh)
Other versions
CN110445954A (en)
Inventor
刘筠璨
王小军
麦海华
Current Assignee
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd
Priority to CN201910680491.1A
Publication of CN110445954A
Application granted
Publication of CN110445954B

Classifications

    • H04L 65/1066 — Session management (network arrangements, protocols or services for supporting real-time applications in data packet communication)
    • H04L 65/75 — Media network packet handling (network streaming of media packets)
    • H04N 23/661 — Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N 5/2222 — Prompting (studio circuitry, devices and equipment)
    • H04N 5/2624 — Studio circuits for obtaining an image composed of whole input images, e.g. split screen

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image acquisition method, an image acquisition apparatus, and an electronic device. The method comprises: acquiring a streaming media file, the streaming media file comprising one or more image frames containing a target object; performing format conversion on the image frames and acquiring a timing area image from each image frame; identifying time information in the timing area image to determine a starting time point; and determining a target time point according to the starting time point and acquiring a target image corresponding to the target time point. The method can acquire high-quality images at the required time points, preventing the user from delaying or forgetting to acquire them; it also reduces the interface integration required with equipment manufacturers, enabling rapid adaptation and deployment of the software solution.

Description

Image acquisition method and device and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image acquisition method, an image acquisition apparatus, and an electronic device.
Background
With the development of image processing technology, machine processing is gradually replacing manual processing in many fields, such as the medical, security, and financial industries, where machines can perform disease diagnosis, security protection, data processing, and the like.
Take cervical examination with a colposcope as an example: after acetic acid is applied to the cervix (the acetowhite test, referred to below as vinegar staining), the colposcope must capture cervical pictures at several time points. When the examination begins, the doctor must step on a pedal twice to signal the vinegar-staining start time, but this interferes with and changes the doctor's habitual workflow, and the doctor may perform it late or forget it altogether. Alternatively, the colposcope manufacturer can issue picture-capture reminders, but this requires the manufacturer to add an interface that signals the start of the examination, which imposes an adaptation cost before the software can be deployed; moreover, when the server is slow, the reminder may arrive with a delay.
In view of this, there is a need in the art to develop a new image acquisition method.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
Embodiments of the present disclosure provide an image acquisition method, an image acquisition apparatus, and an electronic device, which can, at least to some extent, improve the efficiency and accuracy of image acquisition, prevent the user from delaying or forgetting to acquire images, and improve the quality of the acquired images; in addition, they reduce interface integration and enable rapid adaptation and deployment of the software solution.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of an embodiment of the present disclosure, there is provided an image capturing method including: acquiring a streaming media file, wherein the streaming media file comprises one or more image frames containing a target object; carrying out format conversion on the image frame, and acquiring a timing area image from the image frame; identifying time information in the timing area image to determine a starting time point; and determining a target time point according to the starting time point, and acquiring a target image corresponding to the target time point.
According to an aspect of the embodiments of the present disclosure, there is provided an image acquisition apparatus comprising: a first acquisition module configured to acquire a streaming media file, the streaming media file comprising one or more image frames containing a target object; a second acquisition module configured to perform format conversion on the image frames and acquire a timing area image from each image frame; a time identification module configured to identify time information in the timing area image to determine a starting time point; and an image determination module configured to determine a target time point according to the starting time point and acquire a target image corresponding to the target time point.
In some embodiments of the present disclosure, based on the foregoing, the second acquisition module is configured to: call a function in a computer vision library to decompress the image frame, and convert the image information corresponding to the image frame into a digital image matrix.
In some embodiments of the present disclosure, based on the foregoing, the second acquisition module is configured to: acquire a preset cropping window, the preset cropping window corresponding to a timing area in the image frame; and crop the timing area in the image frame using the preset cropping window to obtain the timing area image.
In some embodiments of the present disclosure, based on the foregoing, the time identification module is configured to: analyze the timing area image to acquire a region containing the time information; perform character segmentation on the region containing the time information to obtain a plurality of character regions; recognize the characters contained in each character region to acquire a plurality of pieces of recognition information corresponding to each character region; and match the pieces of recognition information against a preset regular expression to determine target characters, and determine the starting time point according to the target characters corresponding to the character regions.
In some embodiments of the present disclosure, based on the foregoing, the image determination module is configured to: after the starting time point is determined, trigger a timer to start timing; and when the timer's count reaches the difference between a preset time point stored in the timer and the starting time point, take the preset time point as the target time point and acquire the target image corresponding to the target time point.
In some embodiments of the present disclosure, based on the foregoing, the image determination module comprises: a judging unit configured to judge whether a manually acquired image and/or a machine-acquired image corresponding to the target time point exists, and to determine the target image according to the judging result.
In some embodiments of the present disclosure, based on the foregoing scheme, the judging unit comprises: a first image determination unit configured to, when it is judged that a manually acquired image corresponding to the target time point exists, determine the target image through an image recognition module from a candidate set of manually acquired images, the candidate set comprising the manually acquired image frame corresponding to the target time point and manually acquired image frames corresponding to time points close to the target time point; and a second image determination unit configured to, when it is judged that a machine-acquired image corresponding to the target time point exists but no manually acquired image corresponding to the target time point exists, determine the target image through the image recognition module from a candidate set of machine-acquired images, the candidate set comprising the format-converted image frame corresponding to the target time point and format-converted image frames corresponding to time points close to the target time point.
In some embodiments of the present disclosure, based on the foregoing, the first image determination unit is configured to: input the candidate image frames of the candidate image set to the image recognition module; identify, through the image recognition module, the objects in each candidate image frame and judge whether any object other than the target object is present; and, if not, take that candidate image frame as the target image.
According to an aspect of an embodiment of the present disclosure, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image acquisition method as described in the above embodiments.
In the technical solutions provided by some embodiments of the present disclosure, format conversion and timing-area-image extraction are performed on the image frames of an acquired streaming media file; the time information in the timing area image is then identified to determine a starting time point; finally, a target time point is determined from the starting time point, and the target image corresponding to that target time point is acquired. This scheme can automatically acquire images at the required time points, preventing the user from delaying or forgetting image acquisition; it can also improve the quality of the acquired pictures, reduce the interface integration required with equipment manufacturers, and enable rapid adaptation and deployment of the software solution.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 shows a schematic diagram of an exemplary system architecture to which technical aspects of embodiments of the present disclosure may be applied;
fig. 2 schematically shows a flow diagram of an image acquisition method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram for acquiring a timing zone image from an image frame according to one embodiment of the present disclosure;
FIG. 4 schematically illustrates a correspondence diagram of a preset clipping window and a timing zone, according to one embodiment of the present disclosure;
fig. 5 schematically shows a flow diagram for determining a starting point in time according to an embodiment of the present disclosure;
FIG. 6 schematically shows a flow diagram for determining a target image according to one embodiment of the present disclosure;
FIG. 7 schematically illustrates an interface diagram of a message alert box according to one embodiment of the present disclosure;
FIG. 8 schematically shows a schematic structural diagram of an image processing system according to one embodiment of the present disclosure;
fig. 9 schematically shows a flowchart of diagnosing cervical disease according to an embodiment of the present disclosure;
FIG. 10 schematically illustrates a block diagram of an image acquisition device according to one embodiment of the present disclosure;
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a first terminal 101, a second terminal 102, a network 103, and a server 104. The network 103 is used to provide a medium for communication links between the first terminal 101 and the second terminal 102, and between the second terminal 102 and the server 104. The network 103 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of first terminals, second terminals, networks and servers in fig. 1 is merely illustrative. There may be any number of first terminals, second terminals, networks and servers, depending on the actual needs. For example, server 104 may be a server cluster comprised of multiple servers, or the like. In addition, the first terminal 101 may be a terminal device for processing a target object and acquiring an image including the target object, and may be, for example, a device including an imaging apparatus or the like; the second terminal 102 may be a terminal device for receiving and processing the image transmitted by the first terminal 101, and may be a terminal device such as a tablet computer, a portable computer, or a desktop computer.
In one embodiment of the present disclosure, the first terminal 101 may collect image frames containing a target object to form a streaming media file, and then send the streaming media file to the second terminal 102 through the network 103. After receiving the streaming media file, the second terminal 102 may perform format conversion on the image frames to form a recognizable digital image matrix, and may also acquire a timing area image from the image frames. The second terminal 102 can then identify the time information in the timing area image to determine a starting time point, determine a target time point according to the starting time point, and obtain the target image corresponding to the target time point; specifically, the target image may be determined, according to the target time point, from the manually captured images and/or machine-captured images stored in the server 104. The technical scheme of the embodiment of the disclosure can automatically acquire images at the required time points, prevent the user from delaying or forgetting image acquisition, and improve the quality of the acquired images; in addition, it can reduce the interface integration between the software provider and the manufacturer of the first terminal and enable rapid adaptation and deployment of the software solution.
It should be noted that the image capturing method provided by the embodiment of the present disclosure is generally executed by the second terminal 102, and accordingly, the image capturing apparatus is generally disposed in the second terminal 102. However, in other embodiments of the present disclosure, the server may also have similar functions as the second terminal, so as to execute the image capturing scheme provided by the embodiments of the present disclosure.
In the related art, images are usually acquired manually: the user sets a timer as a reminder so that an image is captured at the target moment, but manual acquisition may be late or forgotten. Alternatively, the equipment manufacturer issues capture-time reminders, which requires interface communication to be agreed in advance, costing time and labor; moreover, when the server is slow, the manufacturer's reminder may arrive with a delay of 1-2 s.
In view of the problems in the related art, the embodiments of the present disclosure first propose an image acquisition method that can be applied to any image acquisition scenario, such as medical diagnosis, injury detection, safety protection, and so on. In the medical field, cervical cancer is the most prevalent malignant tumor of the female reproductive system and is currently the only cancer with a definite etiology that can be prevented and treated early and is expected to be eliminated completely. Screening for cervical cancer proceeds in three steps: first, HPV testing or primary screening of exfoliated cervical cells (TCT or Pap smear); second, colposcopy and biopsy for those who test positive in primary screening; and third, cervical pathology confirmation, the three steps advancing progressively. In the second step, colposcopy, images captured at irregular times or with insufficient cervical exposure greatly reduce the accuracy of an AI colposcopy-assisted diagnosis system.
The details of implementation of the technical solution of the embodiment of the present disclosure are described in detail below by taking diagnosis of cervical disease as an example:
fig. 2 schematically shows a flowchart of an image acquisition method according to an embodiment of the present disclosure, which may be performed by a terminal device, which may be the second terminal 102 shown in fig. 1. Referring to fig. 2, the image capturing method at least includes steps S210 to S240, which are described in detail as follows:
in step S210, a streaming media file is acquired, the streaming media file including one or more image frames containing a target object.
In one embodiment of the present disclosure, an image of the patient's cervix can be acquired through the first terminal 101. Specifically, the first terminal 101 may be a colposcope. After the doctor pretreats the affected area, the colposcope's lighting switch is turned on, the objective lens is adjusted to the same height as the examined site, and the objective distance and focal length are adjusted until the image is clear. The doctor then wipes the cervical surface with a cotton ball soaked in 3%-5% acetic acid and observes the changes in the color and state of the cervical surface. The colposcope is equipped with a display screen on which the doctor can observe the magnified cervical surface, acquire cervical pictures at different time points, and diagnose cervical disease from the changes in cervical color and state across the pictures corresponding to those time points.
In an embodiment of the present disclosure, to keep the doctor from delaying or forgetting image acquisition, the second terminal 102 connected to the first terminal 101 can automatically acquire cervical images at the different time points; the second terminal 102 can also remind the doctor to acquire images in time and save the cervical images for each time point. Specifically, a streaming media file in the first terminal 101 can be acquired, image frames containing the target object can be extracted from it, one or more cervical images can be obtained from those frames, and the target image used for disease diagnosis can be derived by processing these images. Streaming media refers to the technology and process of compressing a series of media data, sending it over the network in segments, and thereby transmitting images over the network in real time.
In step S220, format conversion is performed on the image frame, and a timing area image is acquired from the image frame.
In one embodiment of the present disclosure, the image frames in the streaming media file are generally compressed images, for example in JPEG format. If image frames in this format were used directly, subsequent operations might be unable to read them, so the format of the image frames in the streaming media file is converted first. For the conversion, a function in a computer vision library can be called to decompress an image frame and convert the corresponding image information into a digital image matrix. Specifically, the computer vision library may be OpenCV, and each image frame can be converted into a Mat: one of OpenCV's most widely used types, whose most important role is to serve as the data structure that stores an image. Images are generally divided into color images and grayscale images; a cervical image is generally a color image.
In one embodiment of the present disclosure, since cervical pictures must be taken at several time points after vinegar staining begins, determining the starting time of the vinegar staining is crucial. In the embodiment of the present disclosure, the time information in the image frame can be identified to determine the starting time point of the vinegar staining; specifically, the timing area image can be acquired from the image frame, and the time information in the timing area image can then be identified.
Fig. 3 shows a schematic flow chart of acquiring a timing zone image from an image frame, and as shown in fig. 3, the method of acquiring a timing zone image from an image frame at least comprises steps S301-S302. Specifically, the method comprises the following steps:
in step S301, a preset cropping window is acquired, which corresponds to a timing region in an image frame.
In an embodiment of the present disclosure, different manufacturers partition the functions of the colposcope display screen differently: when vinegar staining starts, some colposcope models display the timer in the upper left corner of the screen and others in the lower right corner, so the position of the timing region may also differ between colposcope models. To acquire the timing region image, different preset cropping windows can therefore be set according to the position of the timing region on the particular colposcope, each preset cropping window corresponding to the timing region in the image frame. Fig. 4 shows the correspondence between the preset cropping window and the timing region. As shown in fig. 4, a timing region of size 2 cm x 1 cm is disposed at the upper right corner of the image frame, and the coordinates of its four vertices A, B, C and D in the image frame are (25,15), (27,15), (27,16) and (25,16) in sequence. The vertex coordinates of the preset cropping window can be determined from the coordinates of A, B, C and D, and the window can be made slightly larger than the timing region; for example, its vertices A', B', C' and D' can be set to (24,14), (28,14), (28,17) and (24,17), so that the preset cropping window completely covers the timing region and therefore captures the complete timing information. Of course, the vertex coordinates of the preset cropping window can also be set to the same values as those of the timing region, so that the window covers the timing region exactly.
In step S302, the timing region in the image frame is cropped using the preset cropping window to acquire the timing region image.
In an embodiment of the present disclosure, after receiving an image frame in a streaming media file, the second terminal 102 may clip a timing area in the image frame by using a preset clipping window to obtain a timing area image only containing time information.
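Once the frame is a digital image matrix, this cropping step is simple array slicing. A sketch, assuming row/column bounds derived from the device's timing-region position (the pixel bounds below are illustrative, not taken from any real colposcope model):

```python
import numpy as np

# Hypothetical preset cropping window, slightly larger than the timing
# region so the complete timing information is covered (illustrative values).
ROW_TOP, ROW_BOTTOM = 10, 60
COL_LEFT, COL_RIGHT = 560, 700

def crop_timing_region(frame: np.ndarray) -> np.ndarray:
    """Cut the preset cropping window out of a decoded frame matrix."""
    return frame[ROW_TOP:ROW_BOTTOM, COL_LEFT:COL_RIGHT]

frame = np.zeros((480, 720, 3), dtype=np.uint8)
timing_img = crop_timing_region(frame)  # 50 x 140 x 3 sub-image
```

A per-model lookup of these bounds would play the role of the "different preset cropping windows" described above.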
In step S230, time information in the time zone image is identified to determine a start time point.
In an embodiment of the disclosure, after the timing area is cropped with the preset cropping window to obtain the timing area image, the time information in the timing area image may be identified to determine the starting time point. The time information can be recognized by character recognition. Fig. 5 shows a flow chart for determining the starting time point. As shown in fig. 5, in step S501, the timing area image is analyzed to obtain the region containing the time information; this analysis mainly involves examining the image layout and locating the region that contains characters, which is the region containing the time information. In step S502, character segmentation is performed on the region containing the time information to obtain a plurality of character regions; that is, the region is divided into sub-regions each containing a single character so that the character in each sub-region can be identified individually. For example, if the time information is 00:00, character segmentation yields four regions containing "0" and one region containing ":". In step S503, the characters contained in each character region are recognized to obtain a plurality of pieces of recognition information corresponding to each character region; after the five character regions are obtained, the characters in them can be recognized, and since recognizing a character may produce multiple results (for example, recognizing the character "0" may yield "0", "Q", "o", or "D"), a plurality of pieces of recognition information may be obtained for each region. In step S504, the plurality of pieces of recognition information are matched against a preset regular expression to determine the target characters, and the starting time point is determined from the target characters corresponding to each character region. The preset regular expression may specifically be /[0oOQD]{2}/, so that characters similar to "0" are treated as "0", improving the fault tolerance of character recognition.
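The fault-tolerant matching in step S504 can be sketched as follows. This is a minimal illustration of the idea behind the /[0oOQD]{2}/ pattern; the function name and the per-character mapping are assumptions for illustration, not the patented implementation.

```python
import re

# Characters that OCR commonly confuses with "0", per the character class
# [0oOQD] in the preset regular expression described above.
ZERO_LIKE = re.compile(r"[0oOQD]")

def normalize_time_chars(recognized):
    """Map each recognized character to its intended digit.

    `recognized` holds one candidate string per character region,
    e.g. ["O", "0", ":", "Q", "D"] for a timer reading "00:00".
    """
    out = []
    for ch in recognized:
        if ZERO_LIKE.fullmatch(ch):
            out.append("0")  # treat zero-like characters as "0"
        else:
            out.append(ch)   # keep separators such as ":" unchanged
    return "".join(out)
```

With this normalization, a noisy OCR reading such as `["O", "0", ":", "Q", "D"]` still resolves to the expected start-of-timing value "00:00".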
In one embodiment of the present disclosure, optical character recognition technology may be used to recognize the time information in the timing area image to determine the starting time point; specifically, the tesseract-ocr recognition engine may be used to recognize the time information and determine the starting time point from the recognition result. When vinegar staining is completed and timing starts, timing usually starts from 00:00, so only 00:00 needs to be recognized; for devices from different manufacturers with different time settings, the corresponding time information can be recognized to determine the starting time point. It should be noted that, in the embodiments of the present disclosure, the check on the last digit "0" may also be relaxed when identifying the time information in the timing area image: a reading such as 00:01 or 00:05 may likewise be taken as a signal of successful recognition, which ensures that the flow can continue and improves the robustness of the image acquisition method.
In step S240, a target time point is determined according to the start time point, and a target image corresponding to the target time point is acquired.
In one embodiment of the present disclosure, after the starting time point is determined, a timer may be triggered to start timing in order to determine the target time point. According to the colposcopy procedure, the physician needs to observe the changes of the entire cervix within 2 minutes and 30 seconds after vinegar staining to determine the patient's condition, and within those 2 minutes and 30 seconds there are certain time nodes that the medical guidelines require the physician to focus on for observation and image acquisition for subsequent analysis, generally 60 s, 90 s, 120 s, and 150 s. After the timer is triggered, whether a target time point has been reached can be judged by comparing the timer's count with the difference between a preset time point stored in the timer and the starting time point, where the preset time point is the same as the target time point. When the timer's count reaches that difference, it can be judged that the target time point has been reached, and the target image corresponding to the target time point can then be obtained.
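The timer comparison described above can be sketched as a few lines of Python. The function name and the list-based aggregation are illustrative assumptions; the preset points are the guideline values named in the text.

```python
# Preset time points (in seconds) that the medical guideline asks the
# physician to observe, measured from the start of vinegar staining.
PRESET_POINTS = [60, 90, 120, 150]

def reached_targets(start_s, elapsed_s):
    """Return the preset points reached so far.

    A preset point p is reached when the timer's elapsed count meets
    the stored difference (p - start), as described above.
    """
    return [p for p in PRESET_POINTS if elapsed_s >= p - start_s]
```

For example, with a start time of 00:00 and 95 seconds elapsed, the 60 s and 90 s points have been reached; if recognition only succeeded at 00:05, each difference shrinks by 5 seconds accordingly.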
In an embodiment of the present disclosure, when the target time point is reached, the second terminal 102 may automatically acquire the image frame corresponding to the target time point from the format-converted image frames; at the same time, the doctor operating the colposcopy device may also capture an image manually. Therefore, when acquiring the target image corresponding to the target time point, it is necessary to judge whether a manually captured image and/or a machine-captured image corresponding to the target time point exists, and to determine the target image according to the judgment result.
FIG. 6 is a schematic flow chart of determining the target image. As shown in fig. 6, in step S601, it is judged whether a manually captured image and/or a machine-captured image corresponding to the target time point exists; in step S602, when it is judged that a manually captured image corresponding to the target time point exists, the target image is determined by the image recognition module from a candidate set of manually captured images, where the candidate set includes the manually captured image frame corresponding to the target time point and manually captured image frames corresponding to time points adjacent to the target time point; in step S603, when it is judged that a machine-captured image corresponding to the target time point exists but no manually captured image corresponding to the target time point exists, the target image is determined by the image recognition module from a candidate set of machine-captured images, where that candidate set includes the format-converted image frame corresponding to the target time point and format-converted image frames corresponding to time points adjacent to the target time point.
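The preference order of steps S601–S603 above, where manually captured frames take priority over machine-captured ones, can be sketched as follows; the function name and return shape are illustrative assumptions.

```python
def choose_candidate_set(manual_frames, machine_frames):
    """Pick the candidate set per the Fig. 6 flow: manually captured
    frames are preferred (S602); machine-captured frames are used only
    when no manual frame exists for the target time point (S603)."""
    if manual_frames:
        return ("manual", manual_frames)
    if machine_frames:
        return ("machine", machine_frames)
    return ("none", [])
```

The chosen candidate set is then handed to the image recognition module for the foreign-object and sharpness checks described below.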
In an embodiment of the present disclosure, since image quality has a great influence on the doctor's later diagnosis, the quality of the image needs to be controlled when determining the target image. For example, a cervical image containing a foreign object (such as a cotton swab or forceps) must be excluded; only a clear cervical image containing no foreign object can serve as the target image. In steps S602 and S603, the target image can thus be determined from a plurality of image frames. For example, in step S602, when there is a candidate set of manually captured images containing the manually captured image frame corresponding to the target time point and manually captured image frames corresponding to adjacent time points, those frames can be input to the image recognition module, which recognizes the objects in each frame to judge whether a foreign object is present, so that a qualified manually captured image is obtained and used as the target image. Similarly, in step S603, when there is a candidate set containing the format-converted image corresponding to the target time point and format-converted images corresponding to adjacent time points, the candidate frames can be input to the image recognition module, which recognizes the objects in each candidate frame and judges whether an object other than the target object is present. If such an object exists, the candidate frame is unqualified and cannot be used as the target image; if it does not exist, the candidate frame is qualified and can be used as the target image.
Here, the time points adjacent to the target time point may be the time points within a period starting from the target time point; for example, if the target time point is 60 s and the period is 4 s, all image frames within 60 s–64 s may be identified to determine the target image.
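Assuming frames carry timestamps in seconds, the adjacent-time-point window described above (e.g. 60 s–64 s) can be sketched as follows; the function name and the (timestamp, frame) pair layout are illustrative assumptions.

```python
def frames_in_window(frames, target_s, window_s=4):
    """Collect candidate frames whose timestamp lies in
    [target, target + window].

    `frames` is a list of (timestamp_seconds, frame) pairs, e.g.
    machine-captured frames after format conversion.
    """
    return [f for t, f in frames if target_s <= t <= target_s + window_s]
```

All frames in the returned window are then screened by the image recognition module, so a foreign object at exactly 60 s does not leave the target time point without a usable image.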
Further, since sharpness also has a large influence on the diagnosis result, it is necessary to further judge whether an image frame containing no foreign object can serve as the target image. The sharpness of an image frame can be judged from its pixel gradients: specifically, the pixel gradients of the image frame in the X-axis and Y-axis directions are calculated and compared with a gradient threshold. When the pixel gradients in both the X-axis and Y-axis directions are greater than the gradient threshold, the image frame is sufficiently sharp and can be used as the target image; when the pixel gradient in either direction is less than or equal to the gradient threshold, the image frame is too blurry to be used as the target image.
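A minimal sketch of this gradient-based sharpness check, using simple finite differences on a 2-D grayscale matrix. The aggregation of per-pixel gradients into a mean is an assumption; the text only says the X- and Y-direction gradients are compared against a threshold.

```python
def is_sharp(image, grad_threshold):
    """Judge sharpness from pixel gradients, as described above.

    `image` is a 2-D list of grayscale values. The mean absolute
    finite-difference gradient along each axis must exceed
    `grad_threshold` for the frame to qualify as a target image.
    """
    h, w = len(image), len(image[0])
    # Horizontal (X-axis) differences between neighboring columns.
    gx = [abs(image[y][x + 1] - image[y][x]) for y in range(h) for x in range(w - 1)]
    # Vertical (Y-axis) differences between neighboring rows.
    gy = [abs(image[y + 1][x] - image[y][x]) for y in range(h - 1) for x in range(w)]
    mean_gx = sum(gx) / len(gx)
    mean_gy = sum(gy) / len(gy)
    return mean_gx > grad_threshold and mean_gy > grad_threshold
```

A high-contrast checkerboard-like frame passes the check, while a flat, featureless (blurry) frame fails, matching the intent of excluding unsharp candidates.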
In an embodiment of the present disclosure, when the target time point is reached, the second terminal 102 may also send a prompt message to the first terminal 101 to remind the doctor to capture an image in time; accordingly, a message prompt box may be displayed on the display screen of the first terminal 101. Fig. 7 shows an interface schematic of the message prompt box. As shown in fig. 7, the message prompt box pops up at the lower right corner of the display screen of the first terminal 101 and contains the time prompt "60 s from vinegar staining start time" and the reminder "please collect an image at this time". After seeing the prompt, the doctor can capture the image by triggering the corresponding key or button on the colposcope device.
Of course, the image acquisition method in the embodiments of the present disclosure may also be applied to other scenarios, such as security monitoring and damage detection: images at multiple time points are acquired by the image acquisition method, and whether intrusion, damage, or the like exists is judged from the images at those time points.
For example, to determine the damaged area of a part and the degree of damage, the part may be placed in a specific environment, such as an acidic or alkaline solution; the starting time point of the reaction is determined by character recognition, the changes in color and state of each area of the part are observed, images of each area are obtained at a plurality of target time points, and finally the images corresponding to the same area may be input to an image recognition module, which recognizes the details in each image to judge whether the part is damaged and to what degree.
According to the technical solution of the embodiments of the present disclosure, format conversion and character recognition are performed on the image frames in the streaming media to determine the starting time point, the target time point is determined according to the starting time point, and the target image corresponding to the target time point is determined from the manually captured images and/or the machine-captured images. This prevents the user from delaying or forgetting to capture an image and ensures the high-quality images required for later analysis. In addition, the technical solution avoids building a dedicated interface with each equipment manufacturer, so that the software implementing the image acquisition method can be quickly adapted and deployed.
An image processing system can be constructed based on the image acquisition method of the present disclosure. Fig. 8 shows a schematic structural diagram of the image processing system. As shown in fig. 8, an image processing system 800 includes an image acquisition module 801 and an image recognition module 802, where the image acquisition module 801 is used to execute the image acquisition method of the embodiments of the present disclosure, and the image recognition module 802 is used to perform image recognition on the acquired images and draw a corresponding conclusion from the recognition result. Further, the image recognition module 802 is based on Artificial Intelligence (AI): the theory, methods, techniques, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence; it studies the design principles and implementation methods of various intelligent machines so that the machines have the capabilities of perception, reasoning, and decision making.
Continuing with the example of cervical disease diagnosis, fig. 9 shows a flowchart of cervical disease diagnosis. As shown in fig. 9, in step S901, colposcopy is started: the colposcope device is switched on, the objective lens is adjusted, and the examination begins once preparation is complete. In step S902, an original image is collected: before vinegar staining, an original cervical image is acquired through the colposcope device. In step S903, vinegar staining is started and timed: after vinegar staining, video recording is triggered in the first terminal 101, so that the first terminal 101 can record and send the acquired streaming media file to the second terminal 102. In step S904, the images corresponding to the target time points are acquired: after receiving the streaming media file, the second terminal 102 may process the image frames through the image acquisition module 801 to determine the starting time point, determine the target time points from the starting time point, and obtain the target images corresponding to the target time points; the target time points may specifically be the 60th, 90th, 120th, and 150th seconds after vinegar staining starts, yielding four corresponding target images. In step S905, a diagnosis and treatment suggestion is made according to the acquired images: after the original image and the target images are obtained, the original image and the four target images can be input to the image recognition module 802, which analyzes and recognizes each image, judges from the images corresponding to the different time points whether the patient has a cervical disease and how severe it is, and then makes a corresponding diagnosis and treatment suggestion, for example, a follow-up visit for a patient with a mild condition or a biopsy for a patient with a severe condition.
Embodiments of the apparatus of the present disclosure are described below, which may be used to perform the image capturing method in the above-described embodiments of the present disclosure. For details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the image capturing method described above in the present disclosure.
Fig. 10 schematically shows a block diagram of an image acquisition apparatus according to an embodiment of the present disclosure.
Referring to fig. 10, an image capturing apparatus 1000 according to an embodiment of the present disclosure includes: a first acquisition module 1001, a second acquisition module 1002, a time identification module 1003 and an image determination module 1004.
The first obtaining module 1001 is configured to obtain a streaming media file, where the streaming media file includes one or more image frames containing a target object; a second obtaining module 1002, configured to perform format conversion on an image frame, and obtain a timing area image from the image frame at the same time; a time identification module 1003, configured to identify time information in the image of the timing area to determine a starting time point; and an image determining module 1004, configured to determine a target time point according to the starting time point, and acquire a target image corresponding to the target time point.
In one embodiment of the present disclosure, the second obtaining module 1002 is configured to: and calling functions in a computer vision library to decompress the image frames and converting image information corresponding to the image frames into a digital image matrix.
In one embodiment of the present disclosure, the second obtaining module 1002 is configured to: acquiring a preset cutting window, wherein the preset cutting window corresponds to a timing area in the image frame; and cutting the timing area in the image frame by adopting a preset cutting window to obtain a timing area image.
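The cropping performed by the second obtaining module 1002 can be sketched as a slice over the digital image matrix produced by format conversion; the (x, y, width, height) layout of the preset window is an assumption for illustration.

```python
def crop_timing_area(frame, window):
    """Crop the timing area with a preset cropping window.

    `frame` is a 2-D matrix (list of rows) as produced by format
    conversion; `window` is an assumed (x, y, width, height) tuple
    locating the timing area within the frame.
    """
    x, y, w, h = window
    return [row[x:x + w] for row in frame[y:y + h]]
```

Because the timer overlay sits at a fixed position in a given manufacturer's video, the same preset window can be reused for every frame of the stream.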
In one embodiment of the present disclosure, the time identification module 1003 is configured to: analyzing the timing area image to obtain an area containing time information; performing character segmentation on the region containing the time information to obtain a plurality of character regions; identifying characters contained in each character area to acquire a plurality of identification information corresponding to each character area; and matching the plurality of pieces of identification information according to a preset regular expression to determine target characters, and determining starting time points according to the target characters corresponding to the character areas.
In one embodiment of the present disclosure, the image determination module 1004 is configured to: after the initial time point is determined, triggering a timer to start timing; and when the timing result of the timer reaches the difference value between the preset time point and the starting time point stored in the timer, taking the preset time point as a target time point, and acquiring a target image corresponding to the target time point.
In one embodiment of the present disclosure, the image determination module 1004 includes: and the judging unit is used for judging whether an artificially collected image and/or a machine collected image corresponding to the target time point exist or not and determining the target image according to the judging result.
In one embodiment of the present disclosure, the judging unit includes: the first image determining unit is used for determining a target image from a to-be-selected manually-acquired image set through the image recognition module when the manually-acquired image corresponding to the target time point is judged to exist, wherein the to-be-selected manually-acquired image set comprises a manually-acquired image frame corresponding to the target time point and a manually-acquired image frame corresponding to a time point close to the target time point; and the second image determining unit is used for determining the target image from the to-be-selected machine collected image set through the image recognition module when judging that the machine collected image corresponding to the target time point exists and the manual collected image corresponding to the target time point does not exist, wherein the to-be-selected machine collected image set comprises a format-converted image frame corresponding to the target time point and a format-converted image frame corresponding to a time point close to the target time point.
In one embodiment of the present disclosure, the first image determination unit is configured to: inputting a to-be-selected image frame in an image set acquired by a to-be-selected machine to an image recognition module; identifying objects in the image frame to be selected through an image identification module, and judging whether objects except for the target object exist or not; and if the candidate image frame does not exist, taking the candidate image frame as a target image.
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 1100 of the electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 11, a computer system 1100 includes a Central Processing Unit (CPU) 1101, which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103, and implements the image capturing method described in the above embodiments. In the RAM 1103, various programs and data necessary for system operation are also stored. The CPU 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An Input/Output (I/O) interface 1105 is also connected to the bus 1104.
The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, mouse, and the like; an output section 1107 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1109 performs communication processing via a network such as the internet. A driver 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1110 as necessary, so that a computer program read out therefrom is mounted into the storage section 1108 as necessary.
In particular, the processes described below with reference to the flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 1109 and/or installed from the removable medium 1111. When the computer program is executed by a Central Processing Unit (CPU)1101, various functions defined in the system of the present disclosure are executed.
It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image acquisition method, comprising:
acquiring a streaming media file, wherein the streaming media file comprises one or more image frames containing a target object;
carrying out format conversion on the image frame, and acquiring a timing area image from the image frame;
identifying time information in the timing area image to determine a starting time point;
after the starting time point is determined, triggering a timer to start timing;
and when the timing result of the timer reaches the difference value between the preset time point and the starting time point stored in the timer, taking the preset time point as a target time point, and acquiring a target image corresponding to the target time point.
2. The image acquisition method of claim 1, wherein the format converting the image frames comprises:
and calling a function in a computer vision library to decompress the image frame, and converting image information corresponding to the image frame into a digital image matrix.
3. The image acquisition method according to claim 1, wherein said acquiring a timing region image from said image frame comprises:
acquiring a preset clipping window, wherein the preset clipping window corresponds to a timing area in the image frame;
and cutting a timing area in the image frame by adopting the preset cutting window to obtain the timing area image.
4. The image acquisition method according to claim 1, wherein the identifying time information in the timing region image to determine a starting time point comprises:
analyzing the timing area image to acquire an area containing the time information;
performing character segmentation on the region containing the time information to obtain a plurality of character regions;
identifying characters contained in each character area to acquire a plurality of identification information corresponding to each character area;
and matching the plurality of pieces of identification information according to a preset regular expression to determine target characters, and determining the starting time point according to the target characters corresponding to the character areas.
5. The image capturing method according to claim 1, wherein the acquiring of the target image corresponding to the target time point comprises:
and judging whether an artificially collected image and/or a machine collected image corresponding to the target time point exist or not, and determining the target image according to a judgment result.
6. The image capturing method according to claim 5, wherein the determining whether there is a manually captured image and/or a machine captured image corresponding to the target time point, and determining the target image according to the determination result includes:
when the artificially acquired image corresponding to the target time point is judged to exist, determining the target image from an artificially acquired image set to be selected through an image recognition module, wherein the artificially acquired image set to be selected comprises an artificially acquired image frame corresponding to the target time point and an artificially acquired image frame corresponding to a time point close to the target time point;
when judging that a machine collected image corresponding to the target time point exists and an artificial collected image corresponding to the target time point does not exist, determining the target image from a machine collected image set to be selected through the image recognition module, wherein the machine collected image set to be selected comprises a format-converted image frame corresponding to the target time point and a format-converted image frame corresponding to a time point close to the target time point.
7. The image acquisition method according to claim 6, wherein the determining, by the image recognition module, the target image from the candidate set of machine-captured images comprises:
inputting candidate image frames from the candidate set of machine-captured images into the image recognition module;
identifying objects in each candidate image frame through the image recognition module, and determining whether any object other than the target object is present; and
if no object other than the target object is present, taking that candidate image frame as the target image.
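The filtering step of claim 7 amounts to keeping the first candidate frame in which the recognizer detects nothing but the target object. A minimal sketch, in which the callable `recognize_objects` stands in for the image recognition module and the per-frame detection lists are hypothetical:

```python
def select_target_image(candidate_frames, recognize_objects, target_object):
    """Return the first candidate frame whose detected objects contain
    nothing other than the target object, or None if no frame qualifies."""
    for frame in candidate_frames:
        detected = recognize_objects(frame)
        # Reject the frame if any detected object differs from the target.
        if all(obj == target_object for obj in detected):
            return frame
    return None
```

A frame in which, say, both a probe and a hand are detected would be skipped in favor of a later frame containing the probe alone.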
8. An image acquisition apparatus, comprising:
a first acquisition module, configured to acquire a streaming media file, wherein the streaming media file comprises one or more image frames containing a target object;
a second acquisition module, configured to perform format conversion on the image frames and acquire a timing area image from the image frames;
a time identification module, configured to identify time information in the timing area image to determine a starting time point; and
an image determining module, configured to trigger a timer to start timing after the starting time point is determined, and, when the timing result of the timer reaches the difference between a preset time point stored in the timer and the starting time point, to take the preset time point as a target time point and acquire a target image corresponding to the target time point.
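The timer logic of the image determining module (claim 8) can be sketched as follows: after the starting time point is known, timing begins, and each preset time point becomes a target time point once the elapsed time reaches `preset - start`. The function and parameter names, and the `capture` callback standing in for target-image acquisition, are illustrative assumptions.

```python
import time

def capture_at_preset_points(start_time, preset_points, capture):
    """Start timing from the determined starting time point; when the
    timing result reaches (preset point - start time), treat that preset
    point as the target time point and invoke `capture` for it."""
    timer_start = time.monotonic()
    for preset in sorted(preset_points):
        delay = preset - start_time
        remaining = delay - (time.monotonic() - timer_start)
        if remaining > 0:
            time.sleep(remaining)
        capture(preset)
```

Using a monotonic clock rather than wall-clock time keeps the timing result immune to system clock adjustments during a long acquisition session.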
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the image acquisition method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the program, when executed by a processor, carries out the image acquisition method according to any one of claims 1 to 7.
CN201910680491.1A 2019-07-26 2019-07-26 Image acquisition method and device and electronic equipment Active CN110445954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910680491.1A CN110445954B (en) 2019-07-26 2019-07-26 Image acquisition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110445954A CN110445954A (en) 2019-11-12
CN110445954B true CN110445954B (en) 2022-04-26

Family

ID=68431591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910680491.1A Active CN110445954B (en) 2019-07-26 2019-07-26 Image acquisition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110445954B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965729B (en) * 2021-10-29 2023-12-26 深圳供电局有限公司 Regional safety monitoring system and method

Citations (1)

Publication number Priority date Publication date Assignee Title
CN104581436A (en) * 2015-01-28 2015-04-29 青岛海信宽带多媒体技术有限公司 Video frame positioning method and device

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP5780142B2 (en) * 2011-12-07 2015-09-16 富士通株式会社 Image processing apparatus and image processing method
CN106131660B (en) * 2016-07-15 2019-08-09 青岛海信宽带多媒体技术有限公司 Video location playback method and device
CN108632540B (en) * 2017-03-23 2020-07-03 北京小唱科技有限公司 Video processing method and device
CN108320318B (en) * 2018-01-15 2023-07-28 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN108897899A (en) * 2018-08-23 2018-11-27 深圳码隆科技有限公司 The localization method and its device of the target area of a kind of pair of video flowing
CN110490851B (en) * 2019-02-15 2021-05-11 腾讯科技(深圳)有限公司 Mammary gland image segmentation method, device and system based on artificial intelligence

Also Published As

Publication number Publication date
CN110445954A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
EP3611915B1 (en) Method and apparatus for image processing
US20190102878A1 (en) Method and apparatus for analyzing medical image
CN111488921A (en) Panoramic digital pathological image intelligent analysis system and method
CN108830149B (en) Target bacterium detection method and terminal equipment
CN110930296A (en) Image processing method, device, equipment and storage medium
CN107123124B (en) Retina image analysis method and device and computing equipment
CN113888518A (en) Laryngopharynx endoscope tumor detection and benign and malignant classification method based on deep learning segmentation and classification multitask
CN112767392A (en) Image definition determining method, device, equipment and storage medium
CN112786163B (en) Ultrasonic image processing display method, system and storage medium
US11449991B2 (en) Image processing method, image processing apparatus, and storage medium
CN110473176B (en) Image processing method and device, fundus image processing method and electronic equipment
CN110445954B (en) Image acquisition method and device and electronic equipment
JP2018084861A (en) Information processing apparatus, information processing method and information processing program
CN113158773B (en) Training method and training device for living body detection model
CN112288697B (en) Method, apparatus, electronic device and readable storage medium for quantifying degree of abnormality
KR20220012407A (en) Image segmentation method and apparatus, electronic device and storage medium
CN110349108B (en) Method, apparatus, electronic device, and storage medium for processing image
WO2023231479A1 (en) Pupil detection method and apparatus, and storage medium and electronic device
JP6935663B1 (en) Oral mucosal disease diagnosis support system, method and program
CN112184733A (en) Cervical abnormal cell detection device and method
US20190057271A1 (en) Image processing method, photographing device and storage medium
CN111275045A (en) Method and device for identifying image subject, electronic equipment and medium
JP6503733B2 (en) Diagnosis support apparatus, image processing method in the diagnosis support apparatus, and program thereof
CN114332844B (en) Intelligent classification application method, device, equipment and storage medium of medical image
KR102633823B1 (en) Apparatus for discriminating medical image and method thereof

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210927

Address after: 518052 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Applicant after: Tencent Medical Health (Shenzhen) Co.,Ltd.

Address before: 518000, 35th Floor, Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

SE01 Entry into force of request for substantive examination
GR01 Patent grant