WO2022082999A1 - Object recognition method and apparatus, and terminal device and storage medium - Google Patents


Info

Publication number
WO2022082999A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
candidate frame
target candidate
category
target image
Prior art date
Application number
PCT/CN2020/140419
Other languages
French (fr)
Chinese (zh)
Inventor
黄冠文
程骏
庞建新
谭欢
熊友军
Original Assignee
深圳市优必选科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Publication of WO2022082999A1 publication Critical patent/WO2022082999A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Definitions

  • the present application belongs to the technical field of artificial intelligence, and in particular, relates to an object recognition method, device, terminal device and storage medium.
  • artificial intelligence products can perform object recognition on specific objects to identify the category and location information of the object.
  • Embodiments of the present application provide an object recognition method, apparatus, terminal device and storage medium, which aim to solve the problems of low accuracy and stability of existing target object recognition.
  • an embodiment of the present application provides an object recognition method, including:
  • obtaining the category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image, thereby obtaining and outputting the recognition result of the object in the target image.
  • an object recognition device including:
  • a detection module for acquiring a target image and detecting the image quality of the target image
  • a recognition module configured to perform object recognition on the target image when the image quality of the target image satisfies the first preset condition, to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame;
  • a first obtaining module configured to obtain a candidate frame of which the confidence of the category of the object is greater than a first preset threshold from all the candidate frames, to obtain a first target candidate frame;
  • a filtering module for filtering the first target candidate frames whose objects share a category, to obtain second target candidate frames, wherein the object categories of any two obtained second target candidate frames are different;
  • a second obtaining module configured to calculate the coincidence degree pairwise for all the second target candidate frames, and to eliminate one of any two second target candidate frames whose coincidence degree is greater than the second preset threshold, to obtain third target candidate frames;
  • a third obtaining module configured to, when a plurality of the third target candidate frames are obtained, obtain the category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image, and to obtain and output the recognition result of the object in the target image.
  • an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, which, when executed by the processor, implements the steps of the above object recognition method.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the object recognition method are implemented.
  • an embodiment of the present application provides a computer program product, which, when the computer program product runs on an electronic device, causes the electronic device to execute the steps of the above-mentioned object recognition method.
  • the embodiments of the present application have the following beneficial effects: a target image is acquired and its image quality is detected; when the image quality satisfies the first preset condition, object recognition is performed on the target image to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of that category; candidate frames whose category confidence is greater than the first preset threshold are obtained from all the candidate frames, yielding first target candidate frames; the first target candidate frames whose objects share a category are filtered to obtain second target candidate frames, so that the object categories of any two second target candidate frames differ; the coincidence degree is calculated pairwise over all the second target candidate frames, and one of any two frames whose coincidence degree exceeds the second preset threshold is eliminated, yielding third target candidate frames; when multiple third target candidate frames are obtained, the category of the object in the frame whose center is closest to the preset center position of the target image is taken as the recognition result of the object in the target image and output.
  • obtaining only candidate frames whose category confidence exceeds the first preset threshold reduces false identification of the background as an object; filtering same-category first target candidate frames, then filtering second target candidate frames of different categories by coincidence degree to obtain the third target candidate frames, and finally taking the third target candidate frame whose center is closest to the preset center of the image, yields the object category as the recognition result, which improves the accuracy and stability of target object recognition.
  • FIG. 1 is a schematic flowchart of an object recognition method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an object recognition method provided by an embodiment of the present application.
  • FIG. 3 is a specific schematic flowchart of step S105 provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an object recognition device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the term "if" may, depending on the context, be interpreted as "when", "once", "in response to determining" or "in response to detecting".
  • the phrases "if it is determined" or "if the [described condition or event] is detected" may, depending on the context, be interpreted to mean "once it is determined", "in response to determining", "once the [described condition or event] is detected" or "in response to detecting the [described condition or event]".
  • references in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise.
  • the terms "comprising", "including", "having" and their variants mean "including but not limited to", unless specifically emphasized otherwise.
  • the object recognition method provided by the embodiments of the present application can be applied to terminal devices such as robots, cameras, mobile phones, tablet computers, wearable devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA).
  • the embodiments of the present application do not impose any restrictions on the specific type of the terminal device.
  • the camera can be a rotatable and auto-focusable camera, for example, a pan-tilt camera with both rotatable and auto-focusing functions, a spherical camera, and the like.
  • the robot can be a service robot, an entertainment robot, a military robot, an agricultural robot, etc., wherein the service robot and the entertainment robot can be a bionic robot such as a humanoid robot, a robot dog, a robot cat, etc., or a mechanical arm or a manipulator.
  • in one example, the object recognition method provided by the embodiment of the present application is applied to a robot that has a camera or is communicatively connected to a camera device; a program executing the object recognition method is deployed on the robot side, so that the object recognition method can also be executed when the robot is offline.
  • an object recognition method provided by an embodiment of the present application includes:
  • step S101 a target image is acquired, and the image quality of the target image is detected.
  • the target image may be an image obtained by an image detection device.
  • the image detection device may be a camera, and a video may be captured by the camera, and the target image may be obtained from the captured video stream.
  • the target image may also be obtained by receiving a video or an image sent by an external device; the image quality of the target image is then detected to determine whether it meets the preset requirements.
  • image quality characteristics include, for example, the sharpness, chromaticity and brightness of the image.
  • the detecting of the image quality of the target image includes: detecting whether the sharpness, chromaticity and brightness of the target image are within their respective preset normal ranges; when the sharpness, chromaticity and brightness are all within their corresponding preset normal ranges, it is determined that the target image satisfies the first preset condition.
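  • the patent specifies no concrete metrics or thresholds; as an illustrative sketch only, the first-condition check can be expressed as a per-metric range test (the range values and function name below are hypothetical):

```python
# Hypothetical preset "normal" ranges; the patent leaves these unspecified.
# Sharpness might be, e.g., a variance-of-Laplacian score; chromaticity and
# brightness might be mean channel statistics.
SHARPNESS_RANGE = (50.0, float("inf"))
CHROMA_RANGE = (10.0, 240.0)
BRIGHTNESS_RANGE = (40.0, 220.0)

def meets_first_condition(sharpness: float, chroma: float, brightness: float) -> bool:
    """Return True when every quality metric lies in its preset normal range."""
    return (SHARPNESS_RANGE[0] <= sharpness <= SHARPNESS_RANGE[1]
            and CHROMA_RANGE[0] <= chroma <= CHROMA_RANGE[1]
            and BRIGHTNESS_RANGE[0] <= brightness <= BRIGHTNESS_RANGE[1])
```

  • an image failing any one range test would be skipped (or, for brightness, possibly corrected, as described below).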
  • the method further includes: when the brightness is not within the preset normal brightness range but is within the preset processing brightness range, the target image is processed through a high dynamic range imaging algorithm, so that the processed target image satisfies the first preset condition.
  • the normal brightness range, and the processing brightness range within which the high dynamic range imaging algorithm can adjust the image, are preset in advance; when the brightness falls in the processing brightness range, the target image is processed by the high dynamic range imaging algorithm, and the brightness of the processed target image is then checked to confirm that it meets the preset normal brightness range.
  • the preprocessing of the target image includes steps S201 to S203.
  • Step S201 converting the target image to a target color gamut.
  • converting the target image to the target color gamut may be converting the target image to an image in RGB format.
  • the target image can be converted to an image in YCbCr format or an image in HSV format according to practical applications.
  • Step S202 determining the center position of the target image, and cropping according to the center position with a preset ratio.
  • the center position of the target image is determined, and a region is cropped at a preset ratio around that center position, so that the central area of the image is cut out according to a certain ratio.
  • when the robot performs target recognition, it typically only cares about the object located in the center of its field of vision; cropping the central area of the image at a preset ratio around the center position of the target image removes irrelevant background and can improve the accuracy of object recognition.
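  • as a minimal sketch (function name and coordinate convention are illustrative, not from the patent), a centered crop at a preset ratio reduces to computing a box around the image center:

```python
def center_crop_box(width: int, height: int, ratio: float):
    """Return (left, top, right, bottom) of a centered crop that covers
    `ratio` of each image dimension (0 < ratio <= 1)."""
    crop_w, crop_h = int(width * ratio), int(height * ratio)
    left = (width - crop_w) // 2
    top = (height - crop_h) // 2
    return left, top, left + crop_w, top + crop_h
```

  • e.g. for a 100x100 image and ratio 0.5 the box is (25, 25, 75, 75), i.e. the central quarter of the area.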
  • Step S203 scaling the cropped target image to a preset size according to a preset image scaling algorithm.
  • the cropped target image is scaled to a preset size according to a preset image scaling algorithm, so as to adjust the image to a size that can be processed in subsequent steps.
  • the preset image scaling algorithm may be an interpolation-based image scaling algorithm, such as nearest-neighbor interpolation, linear interpolation, quadratic interpolation, or Gaussian interpolation.
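  • of the interpolation methods listed, nearest-neighbor is the simplest; a hypothetical sketch over a plain 2D list of pixel values (a real implementation would use an image library):

```python
def resize_nearest(img, out_w: int, out_h: int):
    """Nearest-neighbor scaling of a 2D list `img` (rows of pixel values)
    to an out_h x out_w grid, by mapping each output pixel back to the
    nearest source pixel."""
    in_h, in_w = len(img), len(img[0])
    return [[img[min(in_h - 1, y * in_h // out_h)][min(in_w - 1, x * in_w // out_w)]
             for x in range(out_w)]
            for y in range(out_h)]
```
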
  • Step S102: when the image quality of the target image satisfies the first preset condition, perform object recognition on the target image to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame.
  • when the image quality of the target image does not satisfy the first preset condition, step S101 and its subsequent steps are executed for the next image; if the acquired data is video stream data, subsequent steps are not performed on the current image, and the method returns to step S101 for the next frame of image.
  • object recognition is performed on the target image; when at least one object is recognized, at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame are obtained; when no object is recognized, step S101 and its subsequent steps continue with the next image.
  • in an embodiment, when the image quality of the target image satisfies the first preset condition, performing object recognition on the target image to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame includes: inputting the target image into a trained neural network model for object recognition, to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame.
  • a neural network model can be built and trained in advance; object recognition is then performed on the input target image by the trained neural network model, yielding a candidate frame for each identified object, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame.
  • the network design can be carried out through a lightweight network, so that it can be deployed on the terminal device that executes the target recognition algorithm, so that object recognition can also be performed in an offline state.
  • the above neural network training process may be: according to the various object categories to be identified, prepare a large number of sample images containing objects of those categories, each sample image being labeled with candidate frames and, for each candidate frame, the category of the object it contains.
  • the neural network model is trained on the prepared sample images until its preset loss function converges, at which point the neural network model is determined to be trained.
  • the confidence of the category of the object characterizes the degree of certainty that the category assigned to the object in the candidate frame is the real category of the object, i.e., the probability that the predicted category describes the true class of the object.
  • Step S103 obtaining candidate frames with a confidence level of the category of the object greater than a first preset threshold from all the candidate frames, to obtain a first target candidate frame.
  • culling candidate frames whose confidence does not exceed the first preset threshold can improve the accuracy of object recognition.
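  • step S103 is a plain confidence filter; as a hypothetical sketch, assuming each detection is a dict with `box`, `cls` and `conf` keys (these field names are illustrative, not from the patent):

```python
def filter_by_confidence(detections, threshold: float = 0.5):
    """Keep candidate frames whose class confidence exceeds the first
    preset threshold; low-confidence frames are often background
    mistakenly detected as objects."""
    return [d for d in detections if d["conf"] > threshold]
```

  • the surviving detections are the first target candidate frames.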
  • Step S104 filtering the first target candidate frames whose object categories are the same category to obtain second target candidate frames; wherein, the categories of the objects in each of the two obtained second target candidate frames are different.
  • the non-maximum suppression algorithm can be used to filter the first target candidate frames whose objects share a category: among first target candidate frames of the same category, those whose confidence is not the maximum are filtered out.
  • the first target candidate frames remaining after filtering are called second target candidate frames; in this way, only the candidate frame with the maximum confidence is retained for each object category, which can improve the accuracy of target recognition.
  • in an embodiment, filtering the first target candidate frames whose objects share a category to obtain the second target candidate frames includes: filtering out, among the first target candidate frames whose objects share a category, those whose confidence of the object category is not the highest, to obtain the second target candidate frames.
  • Step S105: calculate the coincidence degree pairwise for all the second target candidate frames, and eliminate one of any two second target candidate frames whose coincidence degree is greater than the second preset threshold, to obtain the third target candidate frames.
  • since the obtained second target candidate frames belong to different categories, the coincidence degree between each pair of second target candidate frames is calculated; for any two whose coincidence degree is greater than the second preset threshold, one of the two is eliminated, which prevents repeated recognition and improves accuracy. The second target candidate frames remaining after this elimination are called third target candidate frames.
  • in an embodiment, calculating the coincidence degree pairwise for all the second target candidate frames and eliminating one of any two whose coincidence degree is greater than the second preset threshold, to obtain the third target candidate frames, includes steps S1051 to S1052:
  • Step S1051: calculate the intersection over union (IoU) pairwise for all the second target candidate frames to obtain the coincidence degree between all the second target candidate frames.
  • the intersection over union represents the degree of overlap between two candidate frame regions; it is calculated for each pair of second target candidate frames to obtain the coincidence degree between them.
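  • the standard IoU computation over axis-aligned boxes, as a self-contained sketch (the `(x1, y1, x2, y2)` corner convention is an assumption; the patent does not fix a box representation):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes:
    intersection area divided by union area, in [0, 1]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```
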
  • Step S1052: eliminate one of any two second target candidate frames whose coincidence degree is greater than the second preset threshold; each candidate frame remaining afterwards is called a third target candidate frame.
  • a pre-stored commonness degree of target objects can be used: of two second target candidate frames whose coincidence degree is greater than the second preset threshold, the one whose object category has the lower commonness can be eliminated; alternatively, the one of the two with the lower confidence can be eliminated.
  • in an embodiment, eliminating one of two second target candidate frames whose coincidence degree is greater than the second preset threshold, to obtain the third target candidate frames, includes: obtaining the categories of the objects in the two second target candidate frames whose coincidence degree is greater than the second preset threshold; and eliminating the one of the two whose object category's commonness satisfies the second preset condition, to obtain the third target candidate frames.
  • eliminating the frame whose object category's commonness satisfies the second preset condition may be: eliminating, of the two second target candidate frames whose coincidence degree is greater than the second preset threshold, the one whose object category is less common.
  • Step S106: when a plurality of the third target candidate frames are obtained, obtain the category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image, to obtain and output the recognition result of the object in the target image.
  • the coordinates of the center position of each third target candidate frame in the target image can be obtained, along with the coordinates of the preset center position of the target image; the distances between the centers of the third target candidate frames and the center of the target image are sorted in ascending order, and the category information and position information of the object closest to the image center are output.
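  • selecting the frame nearest the image center can be sketched as follows (same illustrative detection dicts as above; squared distance is used since only the ordering matters):

```python
def pick_central(detections, image_w: int, image_h: int):
    """Return the detection whose box center is closest to the image
    center (the preset center position is assumed here to be the
    geometric center of the image)."""
    cx, cy = image_w / 2, image_h / 2

    def dist2(d):
        x1, y1, x2, y2 = d["box"]
        bx, by = (x1 + x2) / 2, (y1 + y2) / 2
        return (bx - cx) ** 2 + (by - cy) ** 2

    return min(detections, key=dist2)
```
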
  • in an embodiment, after calculating the coincidence degree for all the second target candidate frames and eliminating one of any two whose coincidence degree is greater than the second preset threshold to obtain the third target candidate frames, the method further includes: when a single third target candidate frame is obtained, obtaining the category of the object in that third target candidate frame, and obtaining and outputting the recognition result of the object in the target image.
  • in the embodiments of the present application, candidate frames whose category confidence is greater than the first preset threshold are obtained from all the candidate frames, which reduces false identification of the background as an object; the first target candidate frames of the same category are filtered, and second target candidate frames of different categories are filtered according to the coincidence degree to obtain the third target candidate frames; the category of the object in the third target candidate frame whose center is closest to the preset center position of the target image is then obtained as the recognition result, which can improve the accuracy and stability of target object recognition.
  • the embodiments of the present application further provide an object recognition apparatus, which is configured to perform the steps in the above-mentioned embodiments of the object recognition method.
  • the object recognition device may be a virtual appliance in the terminal device, run by the processor of the terminal device, or may be the terminal device itself.
  • the object recognition apparatus 400 provided by the embodiment of the present application includes:
  • a detection module 401 configured to acquire a target image and detect the image quality of the target image
  • a recognition module 402 configured to perform object recognition on the target image when the image quality of the target image satisfies the first preset condition, to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame;
  • a first obtaining module 403, configured to obtain a candidate frame whose confidence of the category of the object is greater than a first preset threshold from all the candidate frames, to obtain a first target candidate frame;
  • a filtering module 404 for filtering the first target candidate frames whose objects share a category, to obtain second target candidate frames, wherein the object categories of any two obtained second target candidate frames are different;
  • a second obtaining module 405 configured to calculate the coincidence degree pairwise for all the second target candidate frames, and to eliminate one of any two second target candidate frames whose coincidence degree is greater than the second preset threshold, to obtain third target candidate frames;
  • a third obtaining module 406 configured to, when a plurality of the third target candidate frames are obtained, obtain the category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image, and to obtain and output the recognition result of the object in the target image.
  • the second obtaining module 405 includes:
  • a computing unit configured to calculate the intersection ratio for all the second target candidate frames, to obtain the degree of coincidence between all the second target candidate frames
  • a culling unit configured to cull one of the two second target candidate frames with a degree of coincidence greater than a second preset threshold, to obtain a third target candidate frame.
  • the culling unit includes:
  • an acquisition subunit configured to acquire the categories of objects in the two second target candidate frames whose coincidence degree is greater than a second preset threshold
  • a culling subunit for culling, of the two second target candidate frames whose coincidence degree is greater than the second preset threshold, the one whose object category's commonness satisfies the second preset condition, to obtain a third target candidate frame.
  • the filtering module 404 is specifically configured to:
  • the identifying module 402 is specifically configured to:
  • the object recognition device 400 includes:
  • a fourth acquisition module configured to, when a single third target candidate frame is obtained after the second acquisition module is triggered, acquire the category of the object in the third target candidate frame, and obtain and output the recognition result of the object in the target image.
  • the detection module 401 further includes:
  • a detection unit configured to detect whether the sharpness, chromaticity and brightness of the target image are within their respective preset normal ranges;
  • a determination unit configured to determine that the target image satisfies a first preset condition when the sharpness, the chromaticity and the brightness are all within their corresponding preset normal ranges.
  • the object recognition apparatus 400 further includes:
  • a processing module configured to process the target image through a high dynamic range imaging algorithm when the brightness is not within the preset normal brightness range but is within the preset processing brightness range, so that the processed target image satisfies the first preset condition.
  • in the embodiments of the present application, candidate frames whose category confidence is greater than the first preset threshold are obtained from all the candidate frames, which reduces false identification of the background as an object; the first target candidate frames of the same category are filtered, the second target candidate frames are further filtered by coincidence degree, and the category of the object in the resulting candidate frame is obtained as the recognition result, which can improve the accuracy and stability of target object recognition.
  • an embodiment of the present application further provides a terminal device 500, including: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501, such as an object recognition program.
  • when the processor 501 executes the computer program 503, the steps in each of the above embodiments of the object recognition method are implemented.
  • when the processor 501 executes the computer program 503, the functions of the modules in the above-mentioned apparatus embodiments, for example the functions of the modules 401 to 406 shown in FIG. 4, are realized.
  • the computer program 503 may be divided into one or more modules, and the one or more modules are stored in the memory 502 and executed by the processor 501 to complete the present application.
  • the one or more modules may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer program 503 in the terminal device 500 .
  • the computer program 503 can be divided into a detection module, an identification module, a first acquisition module, a filter module, a second acquisition module, and a third acquisition module. The specific functions of each module have been described in the above embodiments and are not repeated here.
  • the terminal device 500 may be a robot, or a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server.
  • the terminal device may include, but is not limited to, the processor 501 and the memory 502 .
  • FIG. 5 is only an example of the terminal device 500 and does not constitute a limitation on the terminal device 500; it may include more or fewer components than shown, combine some components, or have different components.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • the processor 501 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 502 may be an internal storage unit of the terminal device 500 , such as a hard disk or a memory of the terminal device 500 .
  • the memory 502 may also be an external storage device of the terminal device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card equipped on the terminal device 500.
  • the memory 502 may also include both an internal storage unit of the terminal device 500 and an external storage device.
  • the memory 502 is used for storing the computer program and other programs and data required by the terminal device.
  • the memory 502 can also be used to temporarily store data that has been output or is to be output.
  • the disclosed apparatus/terminal device and method may be implemented in other manners.
  • the apparatus/terminal device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling, direct coupling or communication connection may be implemented through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present invention may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program, and the computer program may be stored in a computer-readable storage medium.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.


Abstract

The present application is applicable to the technical field of artificial intelligence. Provided are an object recognition method and apparatus, and a terminal device and a storage medium. The method comprises: performing object recognition on a target image, so as to obtain at least one candidate box, and the category of an object in each candidate box and the confidence coefficient of the category thereof; acquiring candidate boxes in which the confidence coefficients of the categories of the objects are greater than a first pre-set threshold value, so as to obtain first target candidate boxes; filtering the first target candidate boxes in which the categories of the objects are the same, so as to obtain second target candidate boxes; pairwise calculating overlap ratios of all the second target candidate boxes, and acquiring one of two second target candidate boxes, the overlap ratio of which is greater than a second pre-set threshold value, so as to obtain third target candidate boxes; and acquiring the category of an object in the third target candidate box that has the shortest distance between a central position and a pre-set central position of the target image, so as to obtain a recognition result of an object in the target image, and outputting same. By means of the embodiments of the present application, the accuracy and stability of recognizing a target object can be improved.
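The candidate-box post-processing summarized in the abstract (confidence thresholding, per-category filtering, pairwise overlap suppression, and center-distance selection) can be sketched as follows. This is an illustrative sketch only: the `(x1, y1, x2, y2)` box format, the numeric thresholds, the use of intersection-over-union as the "overlap ratio", and the choice of keeping the most confident box per category and per overlapping pair are assumptions not fixed by the application.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float
    category: str
    confidence: float

    def center(self):
        return ((self.x1 + self.x2) / 2.0, (self.y1 + self.y2) / 2.0)

def iou(a, b):
    # Intersection-over-union, used here as the "overlap ratio".
    iw = min(a.x2, b.x2) - max(a.x1, b.x1)
    ih = min(a.y2, b.y2) - max(a.y1, b.y1)
    inter = max(0.0, iw) * max(0.0, ih)
    union = ((a.x2 - a.x1) * (a.y2 - a.y1)
             + (b.x2 - b.x1) * (b.y2 - b.y1) - inter)
    return inter / union if union > 0 else 0.0

def recognize(boxes, image_center, conf_thr=0.5, overlap_thr=0.5):
    # Step 1: drop low-confidence boxes (reduces background false positives).
    first = [b for b in boxes if b.confidence > conf_thr]
    # Step 2: keep one box per category (assumed: the most confident one).
    best = {}
    for b in first:
        if b.category not in best or b.confidence > best[b.category].confidence:
            best[b.category] = b
    second = sorted(best.values(), key=lambda b: b.confidence, reverse=True)
    # Step 3: of any two boxes whose overlap exceeds the threshold, keep one.
    third = []
    for b in second:
        if all(iou(b, kept) <= overlap_thr for kept in third):
            third.append(b)
    # Step 4: pick the box whose center is nearest the preset image center.
    if not third:
        return None
    cx, cy = image_center
    winner = min(third, key=lambda b: hypot(b.center()[0] - cx,
                                            b.center()[1] - cy))
    return winner.category
```

For example, with several detections near the image edge and one near the preset center, only the centered, high-confidence detection survives all four steps.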

Description

Object recognition method, apparatus, terminal device and storage medium

This application claims priority to Chinese Patent Application No. 202011130384.0, filed with the China National Intellectual Property Administration on October 21, 2020, the entire contents of which are incorporated herein by reference.

Technical Field

The present application belongs to the technical field of artificial intelligence, and in particular relates to an object recognition method, apparatus, terminal device and storage medium.
Background

With the rapid development of artificial intelligence technology, a variety of artificial intelligence products have emerged. Such products can perform object recognition on specific objects to identify the category and position information of each object.

At present, most object recognition based on deep learning adopts classification algorithms, which tend to misrecognize the background as an object and to recognize multiple objects in the picture, and cannot accurately determine the target object to be recognized, so the accuracy and stability of target object recognition are not high.
Technical Problem

Embodiments of the present application provide an object recognition method, apparatus, terminal device and storage medium, which aim to solve the existing problem that the accuracy and stability of target object recognition are not high.

Technical Solutions
In a first aspect, an embodiment of the present application provides an object recognition method, including:

acquiring a target image, and detecting the image quality of the target image;

when the image quality of the target image satisfies a first preset condition, performing object recognition on the target image to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame;

obtaining, from all the candidate frames, the candidate frames whose confidence of the category of the object is greater than a first preset threshold, to obtain first target candidate frames;

filtering the first target candidate frames whose objects belong to the same category to obtain second target candidate frames, wherein the categories of the objects in every two obtained second target candidate frames are different;

pairwise calculating the degree of overlap of all the second target candidate frames, and obtaining one of each two second target candidate frames whose degree of overlap is greater than a second preset threshold, to obtain third target candidate frames;

when a plurality of the third target candidate frames are obtained, obtaining the category of the object in the third target candidate frame whose center position has the shortest distance to a preset center position of the target image, to obtain and output a recognition result of the object in the target image.
In a second aspect, an embodiment of the present application provides an object recognition apparatus, including:

a detection module, configured to acquire a target image and detect the image quality of the target image;

a recognition module, configured to, when the image quality of the target image satisfies a first preset condition, perform object recognition on the target image to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame;

a first acquisition module, configured to obtain, from all the candidate frames, the candidate frames whose confidence of the category of the object is greater than a first preset threshold, to obtain first target candidate frames;

a filter module, configured to filter the first target candidate frames whose objects belong to the same category to obtain second target candidate frames, wherein the categories of the objects in every two obtained second target candidate frames are different;

a second acquisition module, configured to pairwise calculate the degree of overlap of all the second target candidate frames, and obtain one of each two second target candidate frames whose degree of overlap is greater than a second preset threshold, to obtain third target candidate frames;

a third acquisition module, configured to, when a plurality of the third target candidate frames are obtained, obtain the category of the object in the third target candidate frame whose center position has the shortest distance to a preset center position of the target image, to obtain and output a recognition result of the object in the target image.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above object recognition method when executing the computer program.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the above object recognition method.

In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on an electronic device, causes the electronic device to execute the steps of the above object recognition method.
Beneficial Effects

Compared with the prior art, the embodiments of the present application have the following beneficial effects. In the embodiments of the present application, a target image is acquired and its image quality is detected; when the image quality of the target image satisfies a first preset condition, object recognition is performed on the target image to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame; the candidate frames whose confidence of the category of the object is greater than a first preset threshold are obtained from all the candidate frames, to obtain first target candidate frames; the first target candidate frames whose objects belong to the same category are filtered to obtain second target candidate frames, wherein the categories of the objects in every two obtained second target candidate frames are different; the degree of overlap of all the second target candidate frames is calculated pairwise, and one of each two second target candidate frames whose degree of overlap is greater than a second preset threshold is obtained, to obtain third target candidate frames; and when a plurality of the third target candidate frames are obtained, the category of the object in the third target candidate frame whose center position has the shortest distance to a preset center position of the target image is obtained, and a recognition result of the object in the target image is obtained and output. Since the candidate frames whose confidence of the category of the object is greater than the first preset threshold are obtained from all the candidate frames, misrecognition of the background as an object can be reduced; the first target candidate frames of the same category are filtered, the second target candidate frames of different categories are filtered according to the degree of overlap to obtain the third target candidate frames, and the category of the object in the third target candidate frame whose center position has the shortest distance to the preset center position of the target image is then obtained to produce the recognition result, which can improve the accuracy and stability of target object recognition.

It can be understood that, for the beneficial effects of the second to fifth aspects, reference may be made to the relevant description of the first aspect, which is not repeated here.
Brief Description of the Drawings

In order to illustrate the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.

FIG. 1 is a schematic flowchart of an object recognition method provided by an embodiment of the present application;

FIG. 2 is a schematic flowchart of an object recognition method provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of a specific implementation of step S105 provided by an embodiment of the present application;

FIG. 4 is a schematic structural diagram of an object recognition apparatus provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
本发明的实施方式Embodiments of the present invention
以下描述中,为了说明而不是为了限定,提出了诸如特定***结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的***、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。In the following description, for the purpose of illustration rather than limitation, specific details such as a specific system structure and technology are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。It is to be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described feature, integer, step, operation, element and/or component, but does not exclude one or more other The presence or addition of features, integers, steps, operations, elements, components and/or sets thereof.
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。It will also be understood that, as used in this specification and the appended claims, the term "and/or" refers to and including any and all possible combinations of one or more of the associated listed items.
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如 果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。As used in the specification of this application and the appended claims, the term "if" may be contextually interpreted as "when" or "once" or "in response to determining" or "in response to detecting ". Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined" or "in response to the determination" or "once the [described condition or event] is detected. ]" or "in response to detection of the [described condition or event]".
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。In addition, in the description of the specification of the present application and the appended claims, the terms "first", "second", "third", etc. are only used to distinguish the description, and should not be construed as indicating or implying relative importance.
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。References in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," "in other embodiments," etc. in various places in this specification are not necessarily All refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise. The terms "including", "including", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
The object recognition method provided by the embodiments of the present application can be applied to terminal devices such as robots, cameras, mobile phones, tablet computers, wearable devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks and personal digital assistants (PDA); the embodiments of the present application do not impose any restriction on the specific type of the terminal device. The camera may be a rotatable, auto-focusing camera, for example, a pan-tilt camera or a spherical camera with both rotation and auto-focus functions. The robot may be a service robot, an entertainment robot, a military robot, an agricultural robot, etc.; the service robot and the entertainment robot may specifically be a bionic robot such as a humanoid robot, a robot dog or a robot cat, or may be a mechanical arm or a manipulator. The embodiments of the present application do not impose any restriction on the specific type of the robot.

For example, in one application scenario, the object recognition method provided by the embodiments of the present application is applied to a robot that has a camera or is communicatively connected to an imaging device such as a camera, and the program for executing the object recognition method is deployed on the robot side, so that the robot can also execute the object recognition method when it is offline.

In order to illustrate the technical solutions described in the present application, the following embodiments are used for description.

Referring to FIG. 1, an object recognition method provided by an embodiment of the present application includes:
Step S101: acquiring a target image, and detecting the image quality of the target image.

Specifically, the target image may be an image acquired by an image detection device; for example, the image detection device may be a camera, a video may be captured by the camera, and the target image may be obtained from the captured video stream. Alternatively, the target image may be obtained from a video or image sent by an external device. After the target image is obtained, the image quality of the target image is detected to determine whether it meets preset requirements; the image quality includes, but is not limited to, image quality characteristics such as the sharpness, chromaticity and brightness of the image.

In one embodiment, the detecting the image quality of the target image includes: detecting whether the sharpness, chromaticity and brightness of the target image are within their respective preset normal ranges; and when the sharpness, the chromaticity and the brightness are all within their respective preset normal ranges, determining that the target image satisfies the first preset condition. A normal sharpness range, a normal chromaticity range and a normal brightness range are preset, and it is detected respectively whether the sharpness of the target image is within the preset normal sharpness range, whether the chromaticity of the target image is within the preset normal chromaticity range, and whether the brightness of the target image is within the preset normal brightness range. When the sharpness, brightness and chromaticity of the target image are all within their respective preset normal ranges, it is determined that the target image satisfies the first preset condition.

In one embodiment, after the detecting whether the sharpness, chromaticity and brightness of the target image are within their respective preset normal ranges, the method further includes: when the brightness is not within the preset normal brightness range but is within a preset processing brightness range, processing the target image through a high dynamic range imaging algorithm, so that the processed target image satisfies the first preset condition. The normal brightness range and the processing brightness range within which adjustment by the high dynamic range imaging algorithm is possible are preset; when the brightness of the target image is not within the preset normal range but is within the preset processing range, the target image is processed through the high dynamic range imaging algorithm, and the brightness of the processed target image is determined to satisfy the preset normal brightness range.
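The brightness branch of the quality gate described above can be sketched as follows. The numeric ranges are illustrative assumptions (the application does not fix concrete thresholds), and mean pixel intensity is used here as a stand-in for the brightness measure:

```python
def mean_brightness(gray):
    # gray: 2-D list of pixel intensities in [0, 255].
    pixels = [p for row in gray for p in row]
    return sum(pixels) / len(pixels)

NORMAL_BRIGHTNESS = (80.0, 180.0)   # assumed "preset normal brightness range"
HDR_PROCESSABLE = (40.0, 220.0)     # assumed "preset processing brightness range"

def quality_gate(gray):
    """Return 'ok', 'hdr' (recoverable via the HDR algorithm) or 'reject'."""
    b = mean_brightness(gray)
    lo, hi = NORMAL_BRIGHTNESS
    if lo <= b <= hi:
        return "ok"
    plo, phi = HDR_PROCESSABLE
    if plo <= b <= phi:
        return "hdr"   # process with the HDR imaging algorithm, then re-check
    return "reject"    # skip this frame and move on to the next one
```

In the full method, a frame classified as "reject" is not recognized and the flow returns to step S101 for the next image; the sharpness and chromaticity checks would gate the frame in the same way.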
In one embodiment, before detecting whether the target image satisfies the first preset condition, the method includes preprocessing the target image; as shown in FIG. 2, the preprocessing of the target image includes steps S201 to S203.

Step S201: converting the target image to a target color gamut.

Specifically, converting the target image to the target color gamut may be converting the target image to an image in RGB format. Of course, depending on the actual application, the target image may instead be converted to an image in YCbCr format or an image in HSV format.

Step S202: determining the center position of the target image, and cropping at a preset ratio according to the center position.

Specifically, the center position of the target image is determined, and cropping is performed at a preset ratio based on the center position of the target image, so as to crop out the central area of the image at a certain ratio. Cropping at a preset ratio may preset a width-to-height ratio of the image and crop the target image according to this ratio, or may preset top, bottom, left and right pixel values and crop based on the center position of the target image and these pixel values. In an application scenario such as a robot performing target recognition, the robot only cares about the object located at the center of its field of view; cropping out the central area of the image at a preset ratio based on the center position of the target image removes the irrelevant background and can improve the accuracy of object recognition.

Step S203: scaling the cropped target image to a preset size according to a preset image scaling algorithm.

Specifically, the cropped target image is scaled to a preset size according to a preset image scaling algorithm, so as to adjust the image to a size that can be processed by the subsequent steps. The preset image scaling algorithm may be an interpolation-based image scaling algorithm, such as a nearest-neighbor interpolation algorithm, a linear interpolation algorithm, a quadratic interpolation algorithm or a Gaussian interpolation algorithm.
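Steps S202 and S203 can be sketched as follows on a single-channel image represented as a 2-D list. The crop ratio and output size are illustrative assumptions; nearest-neighbor interpolation, one of the scaling algorithms named above, is used for the resize, and the color conversion of step S201 is omitted:

```python
def center_crop(img, ratio=0.6):
    # img: 2-D list (H x W); keep the central `ratio` fraction of each side.
    h, w = len(img), len(img[0])
    ch, cw = max(1, int(h * ratio)), max(1, int(w * ratio))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in img[top:top + ch]]

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbor interpolation: each output pixel copies the
    # source pixel whose index scales to the output coordinate.
    h, w = len(img), len(img[0])
    return [[img[y * h // out_h][x * w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def preprocess(img, out_h=224, out_w=224, ratio=0.6):
    # S202: crop the central area; S203: scale to the preset size.
    return resize_nearest(center_crop(img, ratio), out_h, out_w)
```

In practice a library routine (e.g. an OpenCV or Pillow resize with a chosen interpolation mode) would replace these loops; the sketch only fixes the order of operations.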
步骤S102,当所述目标图像的图像质量满足第一预设条件时,对所述目标图像进行物体识别,得到至少一个候选框、每个所述候选框中物体的类别以及每个所述候选框中物体的类别的置信度。Step S102, when the image quality of the target image satisfies the first preset condition, perform object recognition on the target image to obtain at least one candidate frame, a category of objects in each candidate frame, and each candidate frame. Confidence for the class of the object in the box.
具体的,当目标图像的图像质量满足第一预设条件时,则对目标图像进行物体识别。当目标图像的图像质量不满足第一预设条件时,不对目标图像进行物体识别,对下一个图像返回执行步骤S101及其后续步骤,如获取的数据是视频流数据,则对当前图像不进行后续步骤处理,对下一帧图像返回执行步骤S101及其后续步骤。对所述目标图像进行物体识别,当识别出至少一个物体时,得到至少一个候选框、每个所述候选框中物体的类别以及每个所述候选框中物体的类别的置信度;当未识别出物体时,继续对下一个图像返回执行步骤S101及其后续步骤Specifically, when the image quality of the target image satisfies the first preset condition, the object recognition is performed on the target image. When the image quality of the target image does not meet the first preset condition, no object recognition is performed on the target image, and step S101 and subsequent steps are executed for the next image, and if the acquired data is video stream data, the current image is not processed Subsequent steps are processed, and step S101 and its subsequent steps are returned to the next frame of image. Perform object recognition on the target image, and when at least one object is recognized, obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence level of the category of the object in each candidate frame; When the object is recognized, continue to perform step S101 and its subsequent steps for the next image
在一个实施例中,所述当所述目标图像的图像质量满足第一预设条件时,对所述目标图像进行物体识别,得到至少一个候选框、每个所述候选框中物体的类别以及每个所述候选框中物体的类别的置信度,包括:在所述目标图像的图像质量满足所述第一预设条件时,将所述目标图像输入至已训练的神经网络模型进行物体识别,得到至少一个候选框、每个所述候选框中物体的类别以及每个所述候选框中物体的类别的置信度。In one embodiment, when the image quality of the target image satisfies a first preset condition, perform object recognition on the target image to obtain at least one candidate frame, a category of objects in each candidate frame, and The confidence of the category of the object in each candidate frame includes: when the image quality of the target image satisfies the first preset condition, inputting the target image into the trained neural network model for object recognition , obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence level of the category of each object in the candidate frame.
Specifically, a neural network model may be constructed and trained in advance, and object recognition may be performed on the input target image using the trained neural network model, to obtain the candidate frame corresponding to each recognized object, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame. For example, the network may be designed as a lightweight network so that it can be deployed on the terminal device that executes the target recognition algorithm, allowing object recognition to be performed even in an offline state. The training process of the above neural network may be as follows: according to the multiple object types to be recognized, a large number of sample images containing objects of those types are prepared, where each sample image includes annotated candidate frames corresponding to the objects, the object category to which each candidate frame belongs, and the confidence of the category of the object in each candidate frame. The prepared sample images are used to train the neural network model until the preset loss function of the neural network model converges, at which point the neural network model is determined to be a trained neural network model. The confidence of the category of an object characterizes the degree to which the category of the object in the candidate frame can be trusted to be the true category of that object; for example, the confidence may represent the probability that the category of the object in the candidate frame is the true category of the object.
Step S103: acquiring, from all the candidate frames, candidate frames in which the confidence of the category of the object is greater than a first preset threshold, to obtain first target candidate frames.
Specifically, the greater the confidence of the category of an object, the higher the accuracy of recognizing the corresponding object category, and the smaller the possibility of mistaking the background for an object. Removing the candidate frames in which the confidence of the category of the corresponding object is less than or equal to the first preset threshold can therefore improve the accuracy of object recognition.
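The confidence-threshold filtering of step S103 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the dictionary representation of a detection and the threshold value of 0.5 are assumptions made for the example:

```python
def filter_by_confidence(detections, threshold=0.5):
    """Step S103 (sketch): keep only candidate frames whose category
    confidence is strictly greater than the first preset threshold."""
    return [d for d in detections if d["confidence"] > threshold]
```

The survivors of this filter correspond to the first target candidate frames; everything at or below the threshold is treated as likely background and discarded.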
Step S104: filtering the first target candidate frames in which the objects belong to the same category, to obtain second target candidate frames, where the categories of the objects in every two of the obtained second target candidate frames are different.
Specifically, a non-maximum suppression algorithm may be used to filter the first target candidate frames whose objects belong to the same category, filtering out, among the first target candidate frames of the same category, those whose confidence is not the maximum. The first target candidate frames remaining after filtering are referred to as second target candidate frames. In this way, among candidate frames whose objects belong to the same category, only the candidate frame with the maximum confidence is retained, which can improve the accuracy of target recognition.
In one embodiment, filtering the first target candidate frames in which the objects belong to the same category to obtain second target candidate frames includes: filtering out, among the first target candidate frames in which the objects belong to the same category, the first target candidate frames in which the confidence of the category of the object is not the highest, to obtain the second target candidate frames.
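The per-category filtering of step S104 can be sketched as a single pass that keeps, for each category, only the highest-confidence frame. The dictionary fields are assumptions made for the example:

```python
def keep_best_per_category(detections):
    """Step S104 (sketch): among first target candidate frames of the
    same category, keep only the frame with the highest confidence.
    The returned frames each carry a distinct category."""
    best = {}
    for d in detections:
        c = d["category"]
        if c not in best or d["confidence"] > best[c]["confidence"]:
            best[c] = d
    return list(best.values())
```

Because one frame survives per category, every two of the returned second target candidate frames necessarily contain objects of different categories, as required by step S104.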
Step S105: calculating the degree of coincidence between every two of the second target candidate frames, and acquiring one of each two second target candidate frames whose degree of coincidence is greater than a second preset threshold, to obtain third target candidate frames.
Specifically, after all the second target candidate frames are obtained, the obtained second target candidate frames belong to different categories. The degree of coincidence between every two of the second target candidate frames is calculated, and one of each two second target candidate frames whose degree of coincidence is greater than the second preset threshold is removed, which can prevent repeated recognition and improve the accuracy of recognition. All the second target candidate frames remaining after one of each such pair has been removed are referred to as third target candidate frames.
In one embodiment, as shown in FIG. 3, calculating the degree of coincidence between every two of the second target candidate frames, and acquiring one of each two second target candidate frames whose degree of coincidence is greater than a second preset threshold, to obtain third target candidate frames, includes steps S1051 to S1052:
Step S1051: calculating the intersection over union between every two of the second target candidate frames, to obtain the degree of coincidence between every two of the second target candidate frames.
Specifically, the intersection over union (IoU) can represent the degree of overlap between the regions of two candidate frames. The IoU is calculated between every two of the second target candidate frames, thereby obtaining the degree of coincidence between every two of the second target candidate frames.
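The IoU computation of step S1051 can be sketched as follows; the (x1, y1, x2, y2) corner representation of a frame is an assumption made for the example:

```python
def iou(box_a, box_b):
    """Intersection over union of two frames given as (x1, y1, x2, y2):
    area of the intersection divided by area of the union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height are clamped to zero when
    # the frames do not overlap.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

The result lies in [0, 1]: identical frames give 1.0, disjoint frames give 0.0, so it serves directly as the degree of coincidence compared against the second preset threshold.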
Step S1052: removing one of each two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain third target candidate frames.
Specifically, one of each two second target candidate frames whose degree of coincidence is greater than the second preset threshold is removed, and all the second target candidate frames remaining after such removal are referred to as third target candidate frames. Specifically, based on pre-stored commonness values of target objects, of the two second target candidate frames whose degree of coincidence is greater than the second threshold, the second target candidate frame whose object category has the lower commonness may be removed; alternatively, of the two second target candidate frames whose degree of coincidence is greater than the second threshold, the second target candidate frame with the lower confidence may be removed.
In one embodiment, removing one of each two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain third target candidate frames, includes: acquiring the categories of the objects in the two second target candidate frames whose degree of coincidence is greater than the second preset threshold; and removing, of the two second target candidate frames whose degree of coincidence is greater than the second preset threshold, the second target candidate frame in which the commonness of the category of the object satisfies a second preset condition, to obtain the third target candidate frames. Removing the second target candidate frame in which the commonness of the category of the object satisfies the second preset condition may be: removing, of the two second target candidate frames whose degree of coincidence is greater than the second preset threshold, the second target candidate frame in which the commonness of the category of the object is lower.
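One way step S105 as a whole might be sketched is below: overlapping pairs are resolved by removing the frame whose category is less common according to a pre-stored commonness table. The dictionary fields, the `commonness` table, and the threshold value are assumptions made for the example; the patent also allows resolving pairs by confidence instead:

```python
def _iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) frames."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def remove_overlapping(detections, commonness, iou_threshold=0.5):
    """Step S105 (sketch): while any pair of frames coincides beyond the
    second preset threshold, drop the frame whose object category is
    less common per the pre-stored commonness table."""
    kept = list(detections)
    changed = True
    while changed:
        changed = False
        for i in range(len(kept)):
            for j in range(i + 1, len(kept)):
                if _iou(kept[i]["box"], kept[j]["box"]) > iou_threshold:
                    # Remove the frame with the lower category commonness.
                    if commonness.get(kept[i]["category"], 0) < \
                            commonness.get(kept[j]["category"], 0):
                        kept.pop(i)
                    else:
                        kept.pop(j)
                    changed = True
                    break
            if changed:
                break
    return kept
```

The frames that survive are the third target candidate frames: no two of them coincide beyond the threshold, which prevents the same object region from being reported twice.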
Step S106: when a plurality of third target candidate frames are obtained, acquiring the category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image, to obtain and output the recognition result of the object in the target image.
Specifically, when a plurality of third target candidate frames are obtained through the above steps, one of the plurality of third target candidate frames is selected as the candidate frame corresponding to the object to be recognized. The coordinates of the center position of each third target candidate frame in the target image may be acquired, and the coordinates of the preset center position of the target image may then be acquired; the distances between the centers of the third target candidate frames and the center of the target image are sorted in ascending order, and the category information and position information of the object closest to the center of the target image are output.
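The center-distance selection of step S106 can be sketched as follows. The preset center position is assumed here to be the geometric center of the image; the dictionary fields and the return shape are assumptions made for the example:

```python
import math

def select_central_result(detections, image_size):
    """Step S106 (sketch): among multiple third target candidate frames,
    return the category and position of the frame whose center is
    closest to the preset (here: geometric) center of the image."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0

    def center_distance(d):
        x1, y1, x2, y2 = d["box"]
        bx, by = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return math.hypot(bx - cx, by - cy)

    nearest = min(detections, key=center_distance)
    return nearest["category"], nearest["box"]
```

Taking the minimum is equivalent to the sort-ascending-and-take-first procedure described above, and the returned pair corresponds to the category information and position information that the method outputs.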
In one embodiment, after calculating the degree of coincidence between every two of the second target candidate frames, and acquiring one of each two second target candidate frames whose degree of coincidence is greater than the second preset threshold to obtain third target candidate frames, the method further includes: when one third target candidate frame is obtained, acquiring the category of the object in the third target candidate frame, to obtain and output the recognition result of the object in the target image.
In the embodiments of the present application, candidate frames in which the confidence of the category of the object is greater than the first preset threshold are acquired from all the candidate frames, which can reduce false recognition in which the background is mistaken for an object. The first target candidate frames of the same category are filtered, and the second target candidate frames of different categories are filtered according to the degree of coincidence, to obtain the third target candidate frames. The category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image is then acquired to obtain the recognition result, which can improve the accuracy and stability of target object recognition.
The embodiments of the present application further provide an object recognition apparatus, configured to perform the steps in the above embodiments of the object recognition method. The object recognition apparatus may be a virtual appliance in a terminal device, run by the processor of the terminal device, or may be the terminal device itself.
As shown in FIG. 4, the object recognition apparatus 400 provided by the embodiments of the present application includes:
a detection module 401, configured to acquire a target image and detect the image quality of the target image;
a recognition module 402, configured to perform object recognition on the target image when the image quality of the target image satisfies a first preset condition, to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame;
a first acquisition module 403, configured to acquire, from all the candidate frames, candidate frames in which the confidence of the category of the object is greater than a first preset threshold, to obtain first target candidate frames;
a filtering module 404, configured to filter the first target candidate frames in which the objects belong to the same category, to obtain second target candidate frames, where the categories of the objects in every two of the obtained second target candidate frames are different;
a second acquisition module 405, configured to calculate the degree of coincidence between every two of the second target candidate frames, and acquire one of each two second target candidate frames whose degree of coincidence is greater than a second preset threshold, to obtain third target candidate frames; and
a third acquisition module 406, configured to, when a plurality of third target candidate frames are obtained, acquire the category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image, to obtain and output the recognition result of the object in the target image.
In one embodiment, the second acquisition module 405 includes:
a calculation unit, configured to calculate the intersection over union between every two of the second target candidate frames, to obtain the degree of coincidence between every two of the second target candidate frames; and
a removal unit, configured to remove one of each two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain third target candidate frames.
In one embodiment, the removal unit includes:
an acquisition subunit, configured to acquire the categories of the objects in the two second target candidate frames whose degree of coincidence is greater than the second preset threshold; and
a removal subunit, configured to remove, of the two second target candidate frames whose degree of coincidence is greater than the second preset threshold, the second target candidate frame in which the commonness of the category of the object satisfies a second preset condition, to obtain the third target candidate frames.
In one embodiment, the filtering module 404 is specifically configured to:
filter out, among the first target candidate frames in which the objects belong to the same category, the first target candidate frames in which the confidence of the category of the object is not the highest, to obtain the second target candidate frames.
In one embodiment, the recognition module 402 is specifically configured to:
when the image quality of the target image satisfies the first preset condition, input the target image into a trained neural network model for object recognition, to obtain at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame.
In one embodiment, the object recognition apparatus 400 includes:
a fourth acquisition module, configured to, after the second acquisition module is triggered and when one third target candidate frame is obtained, acquire the category of the object in the third target candidate frame, to obtain and output the recognition result of the object in the target image.
In one embodiment, the detection module 401 further includes:
a detection unit, configured to detect whether the sharpness, chromaticity, and brightness of the target image are within their respective preset normal ranges; and
a determination unit, configured to determine that the target image satisfies the first preset condition when the sharpness, the chromaticity, and the brightness are all within their respective preset normal ranges.
In one embodiment, the object recognition apparatus 400 further includes:
a processing module, configured to, when the brightness is not within the preset normal brightness range but is within a preset processing brightness range, process the target image by means of a high dynamic range imaging algorithm, so that the processed target image satisfies the first preset condition.
In the embodiments of the present application, candidate frames in which the confidence of the category of the object is greater than the first preset threshold are acquired from all the candidate frames, which can reduce false recognition in which the background is mistaken for an object. The first target candidate frames of the same category are filtered, and the second target candidate frames of different categories are filtered according to the degree of coincidence, to obtain the third target candidate frames. The category of the object in the third target candidate frame whose center position is at the shortest distance from the preset center position of the target image is then acquired to obtain the recognition result, which can improve the accuracy and stability of target object recognition.
As shown in FIG. 5, an embodiment of the present invention further provides a terminal device 500, including: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501, such as an object recognition program. When the processor 501 executes the computer program 503, the steps in each of the above embodiments of the object recognition method are implemented. When the processor 501 executes the computer program 503, the functions of the modules in the above apparatus embodiments, for example, the functions of the modules 401 to 406 shown in FIG. 4, are implemented.
Exemplarily, the computer program 503 may be divided into one or more modules, and the one or more modules are stored in the memory 502 and executed by the processor 501 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer program 503 in the terminal device 500. For example, the computer program 503 may be divided into a detection module, a recognition module, a first acquisition module, a filtering module, a second acquisition module, and a third acquisition module; the specific functions of each module have been described in the above embodiments and are not repeated here.
The terminal device 500 may be a robot, or a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art can understand that FIG. 5 is only an example of the terminal device 500 and does not constitute a limitation on the terminal device 500, which may include more or fewer components than shown, or combine certain components, or have different components; for example, the terminal device may further include input/output devices, a network access device, a bus, and the like.
The processor 501 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 502 may be an internal storage unit of the terminal device 500, such as a hard disk or memory of the terminal device 500. The memory 502 may also be an external storage device of the terminal device 500, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 500. Further, the memory 502 may include both an internal storage unit of the terminal device 500 and an external storage device. The memory 502 is configured to store the computer program and other programs and data required by the terminal device. The memory 502 may also be configured to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is merely used as an example for illustration. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to accomplish all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art can appreciate that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative; for example, the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (10)

  1. An object recognition method, comprising:
    acquiring a target image and detecting the image quality of the target image;
    when the image quality of the target image satisfies a first preset condition, performing object recognition on the target image to obtain at least one candidate frame, a category of the object in each candidate frame, and a confidence of the category of the object in each candidate frame;
    obtaining, from all the candidate frames, candidate frames in which the confidence of the category of the object is greater than a first preset threshold, to obtain first target candidate frames;
    filtering the first target candidate frames whose objects belong to the same category to obtain second target candidate frames, wherein the categories of the objects in any two of the obtained second target candidate frames are different;
    calculating the degree of coincidence between all the second target candidate frames pairwise, and obtaining one of any two second target candidate frames whose degree of coincidence is greater than a second preset threshold, to obtain third target candidate frames;
    when a plurality of third target candidate frames are obtained, obtaining the category of the object in the third target candidate frame whose center position is at the shortest distance from a preset center position of the target image, to obtain and output a recognition result of the object in the target image.
  2. The object recognition method according to claim 1, wherein calculating the degree of coincidence between all the second target candidate frames pairwise, and obtaining one of any two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain the third target candidate frames, comprises:
    calculating the intersection-over-union between all the second target candidate frames pairwise to obtain the degree of coincidence between each pair of second target candidate frames;
    eliminating one of any two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain the third target candidate frames.
  3. The object recognition method according to claim 2, wherein eliminating one of the two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain the third target candidate frames, comprises:
    obtaining the categories of the objects in the two second target candidate frames whose degree of coincidence is greater than the second preset threshold;
    eliminating, from the two second target candidate frames whose degree of coincidence is greater than the second preset threshold, the second target candidate frame in which the commonness of the category of the object satisfies a second preset condition, to obtain the third target candidate frames.
  4. The object recognition method according to claim 1, wherein filtering the first target candidate frames whose objects belong to the same category to obtain the second target candidate frames comprises:
    filtering out, among the first target candidate frames whose objects belong to the same category, the first target candidate frames in which the confidence of the category of the object is not the highest, to obtain the second target candidate frames.
  5. The object recognition method according to claim 1, wherein when the image quality of the target image satisfies the first preset condition, performing object recognition on the target image to obtain the at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame, comprises:
    when the image quality of the target image satisfies the first preset condition, inputting the target image into a trained neural network model for object recognition, to obtain the at least one candidate frame, the category of the object in each candidate frame, and the confidence of the category of the object in each candidate frame.
  6. The object recognition method according to claim 1, wherein after calculating the degree of coincidence between all the second target candidate frames pairwise, and obtaining one of any two second target candidate frames whose degree of coincidence is greater than the second preset threshold, to obtain the third target candidate frames, the method further comprises:
    when one third target candidate frame is obtained, obtaining the category of the object in the third target candidate frame, to obtain and output a recognition result of the object in the target image.
  7. The object recognition method according to any one of claims 1 to 6, wherein detecting the image quality of the target image comprises:
    detecting whether the sharpness, chromaticity, and brightness of the target image are within their respective preset normal ranges;
    when the sharpness, the chromaticity, and the brightness are all within their respective preset normal ranges, determining that the target image satisfies the first preset condition.
  8. The object recognition method according to claim 7, wherein after detecting whether the sharpness, chromaticity, and brightness of the target image are within their respective preset normal ranges, the method further comprises:
    when the brightness is not within the preset normal brightness range but is within a preset processing brightness range, processing the target image by a high dynamic range imaging algorithm, so that the processed target image satisfies the first preset condition.
  9. An object recognition apparatus, comprising:
    a detection module, configured to acquire a target image and detect the image quality of the target image;
    a recognition module, configured to perform object recognition on the target image when the image quality of the target image satisfies a first preset condition, to obtain at least one candidate frame, a category of the object in each candidate frame, and a confidence of the category of the object in each candidate frame;
    a first obtaining module, configured to obtain, from all the candidate frames, candidate frames in which the confidence of the category of the object is greater than a first preset threshold, to obtain first target candidate frames;
    a filtering module, configured to filter the first target candidate frames whose objects belong to the same category to obtain second target candidate frames, wherein the categories of the objects in any two of the obtained second target candidate frames are different;
    a second obtaining module, configured to calculate the degree of coincidence between all the second target candidate frames pairwise, and obtain one of any two second target candidate frames whose degree of coincidence is greater than a second preset threshold, to obtain third target candidate frames;
    a third obtaining module, configured to, when a plurality of third target candidate frames are obtained, obtain the category of the object in the third target candidate frame whose center position is at the shortest distance from a preset center position of the target image, to obtain and output a recognition result of the object in the target image.
  10. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 8.
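The selection pipeline recited in claims 1 to 4 — confidence thresholding, keeping only the highest-confidence frame per category, pairwise coincidence (intersection-over-union) suppression, and center-distance selection — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the function names, the `(x1, y1, x2, y2)` box format, and the threshold values `conf_thr` and `iou_thr` are all assumptions, standing in for the "first preset threshold" and "second preset threshold" of the claims.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def select_box(detections, image_center, conf_thr=0.5, iou_thr=0.5):
    """detections: list of (box, category, confidence) triples.

    Returns the single surviving detection, or None if nothing passes.
    """
    # Step 1 (claim 1): keep candidates above the first preset threshold.
    first = [d for d in detections if d[2] > conf_thr]
    # Step 2 (claim 4): per category, keep only the highest-confidence frame.
    best = {}
    for box, cat, conf in first:
        if cat not in best or conf > best[cat][2]:
            best[cat] = (box, cat, conf)
    second = list(best.values())
    # Step 3 (claim 2): pairwise coincidence; drop one of any pair whose
    # IoU exceeds the second preset threshold.
    third = []
    for d in second:
        if all(iou(d[0], kept[0]) <= iou_thr for kept in third):
            third.append(d)
    # Step 4 (claim 1): among survivors, pick the frame whose center is
    # closest to the preset center position of the image.
    def center_dist_sq(d):
        x1, y1, x2, y2 = d[0]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        return (cx - image_center[0]) ** 2 + (cy - image_center[1]) ** 2
    return min(third, key=center_dist_sq) if third else None
```

Note the deliberate difference from classical non-maximum suppression: step 2 suppresses duplicates *within* a category by confidence, while step 3 suppresses overlap *across* categories by coincidence, matching the claim language that any two second target candidate frames hold objects of different categories.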
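The image-quality gate of claims 7 and 8 can likewise be sketched: the image passes when sharpness, chromaticity, and brightness all fall in their normal ranges, and an out-of-range brightness that still falls in a wider "processing" range is recoverable via HDR processing. The metric scales and numeric ranges below are purely illustrative assumptions; the patent does not fix how sharpness, chromaticity, or brightness are measured, nor the preset ranges.

```python
def in_range(value, lo, hi):
    """True when value lies in the closed interval [lo, hi]."""
    return lo <= value <= hi

def quality_gate(sharpness, chroma, brightness,
                 sharp_rng=(0.3, 1.0), chroma_rng=(0.2, 0.9),
                 bright_rng=(0.25, 0.85), hdr_rng=(0.1, 0.95)):
    """Return 'pass', 'hdr' (recoverable by HDR processing), or 'reject'.

    All metrics are assumed normalized to [0, 1]; the ranges are
    illustrative stand-ins for the claims' preset normal ranges.
    """
    sharp_ok = in_range(sharpness, *sharp_rng)
    chroma_ok = in_range(chroma, *chroma_rng)
    if sharp_ok and chroma_ok and in_range(brightness, *bright_rng):
        return "pass"    # first preset condition satisfied (claim 7)
    if sharp_ok and chroma_ok and in_range(brightness, *hdr_rng):
        return "hdr"     # brightness abnormal but processable (claim 8)
    return "reject"
```

A caller would run object recognition only on a `"pass"` result, apply an HDR algorithm and re-check on `"hdr"`, and discard the frame on `"reject"`.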
PCT/CN2020/140419 2020-10-21 2020-12-28 Object recognition method and apparatus, and terminal device and storage medium WO2022082999A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011130384.0A CN112348778B (en) 2020-10-21 2020-10-21 Object identification method, device, terminal equipment and storage medium
CN202011130384.0 2020-10-21

Publications (1)

Publication Number Publication Date
WO2022082999A1 true WO2022082999A1 (en) 2022-04-28

Family

ID=74359437

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140419 WO2022082999A1 (en) 2020-10-21 2020-12-28 Object recognition method and apparatus, and terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN112348778B (en)
WO (1) WO2022082999A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677573A (en) * 2022-05-30 2022-06-28 上海捷勃特机器人有限公司 Visual classification method, system, device and computer readable medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN113158869A (en) * 2021-04-15 2021-07-23 深圳市优必选科技股份有限公司 Image recognition method and device, terminal equipment and computer readable storage medium
CN113657333B (en) * 2021-08-23 2024-01-12 深圳科卫机器人科技有限公司 Guard line identification method, guard line identification device, computer equipment and storage medium
CN116543189B (en) * 2023-06-29 2023-09-26 天津所托瑞安汽车科技有限公司 Target detection method, device, equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US20130114891A1 (en) * 2011-11-07 2013-05-09 Tandent Vision Science, Inc. Post processing for improved generation of intrinsic images
CN108776819A (en) * 2018-06-05 2018-11-09 Oppo广东移动通信有限公司 A kind of target identification method, mobile terminal and computer readable storage medium
CN110852258A (en) * 2019-11-08 2020-02-28 北京字节跳动网络技术有限公司 Object detection method, device, equipment and storage medium
CN111222419A (en) * 2019-12-24 2020-06-02 深圳市优必选科技股份有限公司 Object identification method, robot and computer readable storage medium
CN111368698A (en) * 2020-02-28 2020-07-03 Oppo广东移动通信有限公司 Subject recognition method, subject recognition device, electronic device, and medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN106557778B (en) * 2016-06-17 2020-02-07 北京市商汤科技开发有限公司 General object detection method and device, data processing device and terminal equipment
CN106778835B (en) * 2016-11-29 2020-03-24 武汉大学 Remote sensing image airport target identification method fusing scene information and depth features
CN109377508B (en) * 2018-09-26 2020-12-18 北京字节跳动网络技术有限公司 Image processing method and device
CN109977943B (en) * 2019-02-14 2024-05-07 平安科技(深圳)有限公司 Image target recognition method, system and storage medium based on YOLO
CN110033424A (en) * 2019-04-18 2019-07-19 北京迈格威科技有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of image procossing
CN111047879A (en) * 2019-12-24 2020-04-21 苏州奥易克斯汽车电子有限公司 Vehicle overspeed detection method
CN111339839B (en) * 2020-02-10 2023-10-03 广州众聚智能科技有限公司 Intensive target detection metering method
CN111507204A (en) * 2020-03-27 2020-08-07 北京百度网讯科技有限公司 Method and device for detecting countdown signal lamp, electronic equipment and storage medium
CN111783863A (en) * 2020-06-23 2020-10-16 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20130114891A1 (en) * 2011-11-07 2013-05-09 Tandent Vision Science, Inc. Post processing for improved generation of intrinsic images
CN108776819A (en) * 2018-06-05 2018-11-09 Oppo广东移动通信有限公司 A kind of target identification method, mobile terminal and computer readable storage medium
CN110852258A (en) * 2019-11-08 2020-02-28 北京字节跳动网络技术有限公司 Object detection method, device, equipment and storage medium
CN111222419A (en) * 2019-12-24 2020-06-02 深圳市优必选科技股份有限公司 Object identification method, robot and computer readable storage medium
CN111368698A (en) * 2020-02-28 2020-07-03 Oppo广东移动通信有限公司 Subject recognition method, subject recognition device, electronic device, and medium

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114677573A (en) * 2022-05-30 2022-06-28 上海捷勃特机器人有限公司 Visual classification method, system, device and computer readable medium
CN114677573B (en) * 2022-05-30 2022-08-26 上海捷勃特机器人有限公司 Visual classification method, system, device and computer readable medium

Also Published As

Publication number Publication date
CN112348778B (en) 2023-10-27
CN112348778A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
WO2022082999A1 (en) Object recognition method and apparatus, and terminal device and storage medium
US11398084B2 (en) Method, apparatus and application system for extracting a target feature
US8498444B2 (en) Blob representation in video processing
CN116018616A (en) Maintaining a fixed size of a target object in a frame
US11600008B2 (en) Human-tracking methods, systems, and storage media
US8472669B2 (en) Object localization using tracked object trajectories
US20220084304A1 (en) Method and electronic device for image processing
US11455831B2 (en) Method and apparatus for face classification
JP6309549B2 (en) Deformable expression detector
CN110335216B (en) Image processing method, image processing apparatus, terminal device, and readable storage medium
CN109840883B (en) Method and device for training object recognition neural network and computing equipment
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
US20210064851A1 (en) Age recognition method, storage medium and electronic device
CN111191582B (en) Three-dimensional target detection method, detection device, terminal device and computer readable storage medium
KR20210012012A (en) Object tracking methods and apparatuses, electronic devices and storage media
CN113673584A (en) Image detection method and related device
US11727784B2 (en) Mask wearing status alarming method, mobile device and computer readable storage medium
CN112614110B (en) Method and device for evaluating image quality and terminal equipment
CN112069887A (en) Face recognition method, face recognition device, terminal equipment and storage medium
US11709914B2 (en) Face recognition method, terminal device using the same, and computer readable storage medium
CN113158773B (en) Training method and training device for living body detection model
WO2019095469A1 (en) Method and system for face detection
CN111507252A (en) Human body falling detection device and method, electronic terminal and storage medium
Pandey et al. Implementation of 5-block convolutional neural network (cnn) for saliency improvement on flying object detection in videos
CN111160363B (en) Method and device for generating feature descriptors, readable storage medium and terminal equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20958572

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20958572

Country of ref document: EP

Kind code of ref document: A1