WO2020134010A1 - Training of an image key point extraction model and image key point extraction - Google Patents

Training of an image key point extraction model and image key point extraction

Info

Publication number
WO2020134010A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
key point
model
point extraction
sub-model
Prior art date
Application number
PCT/CN2019/094740
Other languages
English (en)
Chinese (zh)
Inventor
喻冬东
王长虎
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Publication of WO2020134010A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to training of image key point extraction models and image key point extraction.
  • the key points of an image are usually extracted through a convolutional neural network, and labeled images are used to uniformly train the image key point extraction model.
  • differences in image clarity or in the shooting environment may make key points in some images difficult to extract. When such images are used for unified training, the resulting network therefore has limited applicability and lower accuracy.
  • a training method for an image key point extraction model is provided, where the image key point extraction model includes a plurality of cascaded sub-models, and the method includes:
  • inputting a training image into the image key point extraction model to obtain the key points output by each sub-model, as one training pass of the image key point extraction model;
  • for each sub-model, determining the difference between the key points output by the sub-model and the key points in the training image corresponding to the degree identifier of the sub-model, where the degree identifier is used to characterize the degree of difficulty of key point extraction; and
  • determining the sum of the differences corresponding to the sub-models as the target difference of the image key point extraction model, and, when the training times of the image key point extraction model do not reach a preset number, updating the image key point extraction model according to the target difference.
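  • Using assumed notation not present in the original disclosure (N cascaded sub-models, with d_i denoting the difference determined for sub-model i), the target difference described above can be sketched as:

```latex
% Assumed notation: d_i is the difference between the key points output by
% sub-model i and the key points in the training image whose degree
% identifier corresponds to sub-model i.
D_{\mathrm{target}} = \sum_{i=1}^{N} d_i
```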
  • the input of the first sub-model in the image key point extraction model is a feature map of the human body image portion in the training image, and the input of each sub-model in the image key point extraction model other than the first sub-model is the key points output by the previous sub-model and the feature map of the human body image portion in the training image.
  • the feature map of the human body image portion in the training image is determined in the following manner:
  • a first image corresponding to the human body image portion of the training image is extracted; the resolution of the first image is adjusted to a preset resolution to obtain a second image; and the feature map of the human body image portion in the training image is determined according to the second image.
  • an image key point extraction method is provided, including: receiving a target image, the target image including a human body image portion; and inputting the target image into an image key point extraction model, and determining the key points output by the last sub-model of the image key point extraction model as the key points of the human body image portion in the target image, where the image key point
  • extraction model includes multiple cascaded sub-models, and the image key point extraction model is obtained by training according to the method of the first aspect.
  • a training device for an image key point extraction model is provided, where the image key point extraction model includes a plurality of cascaded sub-models, and the device includes:
  • a processing module, configured to input the training image into the image key point extraction model to obtain the key points output by each sub-model, as one training pass of the image key point extraction model;
  • a first determining module, configured to determine, for each sub-model, the difference between the key points output by the sub-model and the key points in the training image corresponding to the degree identifier of the sub-model, where the degree identifier is used to characterize the degree of difficulty of key point extraction; and
  • an update module, configured to determine the sum of the differences corresponding to the respective sub-models as the target difference of the image key point extraction model, and, when the training times of the image key point extraction model do not reach the preset number, to update the image key point extraction model according to the target difference.
  • the processing module then inputs the training image into the updated image key point extraction model to obtain the key points output by each sub-model, until the training times of the image key point extraction model reach the preset number.
  • the input of the first sub-model in the image key point extraction model is a feature map of the human body image portion in the training image, and the input of each sub-model other than the first sub-model is the key points output by the previous sub-model and the feature map of the human body image portion in the training image.
  • the device further includes a feature extraction module for obtaining the feature map of the human body image portion in the training image, and the feature extraction module includes:
  • an adjustment sub-module, configured to adjust the resolution of a first image, corresponding to the human body image portion of the training image, to a preset resolution to obtain a second image, and to determine the feature map of the human body image portion in the training image according to the second image.
  • an image key point extraction device comprising:
  • a receiving module, configured to receive a target image, the target image including a human body image portion; and
  • a second determining module, configured to input the target image into an image key point extraction model and to determine the key points output by the last sub-model of the image key point extraction model as the key points of the human body image portion in the target image, where
  • the image key point extraction model includes multiple cascaded sub-models, and the image key point extraction model is obtained by training according to the method of the first aspect.
  • a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the method of the first aspect described above.
  • a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the method of the second aspect described above.
  • an electronic device is provided, including: a memory storing a computer program; and a processor configured to execute the computer program in the memory to implement the method of the first aspect.
  • another electronic device is provided, including: a memory storing a computer program; and a processor configured to execute the computer program in the memory to implement the method of the second aspect.
  • each sub-model of the image key point extraction model outputs key points, and the difference is calculated separately for each sub-model, so that each sub-model in the image key point extraction model can focus on the key points corresponding to its degree identifier, which facilitates the extraction of key points with different degrees of difficulty.
  • by using the difference of each sub-model to determine the target difference of the image key point extraction model and thereby update the model, the accuracy of the image key point extraction model can be effectively ensured. Handling key points of different degrees of difficulty separately improves the application range of the image key point extraction model and enhances the user experience.
  • FIG. 1 is a flowchart of a training method for an image keypoint extraction model according to an exemplary embodiment of the present disclosure
  • FIG. 2 is a flowchart of a method of acquiring a feature map of a human image portion in a training image according to an exemplary embodiment of the present disclosure
  • FIG. 3 is a flowchart of an image key point extraction method according to an exemplary embodiment of the present disclosure
  • FIG. 4 is a block diagram of a training device for an image keypoint extraction model according to an exemplary embodiment of the present disclosure
  • FIG. 5 is a block diagram of an image keypoint extraction device according to an exemplary embodiment of the present disclosure.
  • FIG. 6 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • FIG. 7 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a training method for an image keypoint extraction model according to an exemplary embodiment of the present disclosure, the image keypoint extraction model including multiple cascaded sub-models.
  • In step S11, the training image is input into the image key point extraction model, and the key points output by each sub-model are obtained, as one training pass of the image key point extraction model.
  • a large number of images can be obtained from a database or the Internet. After that, the key points in the image are marked to determine the training image.
  • In step S12, for each sub-model, the difference between the key points output by the sub-model and the key points corresponding to the degree identifier of the sub-model in the training image is determined, where the degree identifier is used to characterize the degree of difficulty of key point extraction.
  • when the training image is labeled, the difficulty of extracting each key point may also be marked.
  • For example, key points that are relatively easy to extract may be marked with a first degree identifier, which is used to characterize that extraction of the key point is relatively simple. By contrast, it is difficult to extract the key points of the human body image portion in blurred, low-resolution training images.
  • such key points in the training image can be marked with a second degree identifier, which is used to characterize that extraction of the key point is relatively difficult.
  • different key points in the training image may be directly labeled with degree identifiers.
  • the key points that are more difficult to extract from the training image are labeled with the second degree identifier, and the key points that are easier to extract from the training image are labeled with the first degree identifier.
  • for example, the degree identifier corresponding to the first sub-model is the first degree identifier,
  • and the degree identifier corresponding to the next sub-model is the second degree identifier.
  • the difference corresponding to the first sub-model is then determined according to the key points output by the first sub-model and the key points corresponding to the first degree identifier in the training image, and the difference corresponding to the next sub-model is determined according to the key points output by the next sub-model and the key points corresponding to the second degree identifier in the training image. Therefore, when determining the difference corresponding to each sub-model, the sub-model only needs to focus on the key points corresponding to its own degree identifier.
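  • As a minimal illustrative sketch (in Python with PyTorch, which the disclosure does not mandate), the per-sub-model difference can be computed over only the key points whose degree identifier matches the sub-model; the names `predicted`, `labels`, `degree_ids`, and `target_degree` are assumptions for illustration:

```python
import torch

def sub_model_difference(predicted, labels, degree_ids, target_degree):
    """predicted, labels: (num_keypoints, 2) tensors of (x, y) coordinates.
    degree_ids: (num_keypoints,) tensor of degree identifiers (e.g. 1 = easy, 2 = hard).
    target_degree: the degree identifier assigned to this sub-model."""
    mask = (degree_ids == target_degree)        # keep only key points of this difficulty
    if mask.sum() == 0:
        return predicted.new_zeros(())          # no key points of this degree in the image
    # Mean squared distance between predicted and labeled key points of this degree
    return ((predicted[mask] - labels[mask]) ** 2).sum(dim=1).mean()
```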
  • In step S13, the sum of the differences corresponding to the sub-models is determined as the target difference of the image key point extraction model.
  • the image key point extraction model is updated according to the target difference.
  • the difference corresponding to each sub-model can be used to characterize the accuracy with which that sub-model extracts the key points of its corresponding degree identifier. The smaller the difference, the more accurate the extraction of those key points.
  • the sum of the differences corresponding to each sub-model can be determined as the target difference of the image key point extraction model; in this way, the overall difference of the image key point extraction model is comprehensively characterized by the differences corresponding to the individual sub-models, and the image key point extraction model can be updated according to the target difference.
  • the preset number of times may be set according to actual usage scenarios. For example, in a scene with higher accuracy requirements, the preset number of times may be set to be larger; in a scene with general accuracy requirements, the preset number of times may be set to be smaller.
  • in this way, each sub-model of the image key point extraction model outputs key points, and the difference is calculated separately for each sub-model, so that each sub-model in the image key point extraction model can focus on the key points corresponding to its degree identifier, which facilitates the extraction of key points with different degrees of difficulty.
  • by using the difference of each sub-model to determine the target difference of the image key point extraction model and thereby update the model, the accuracy of the image key point extraction model can be effectively ensured. Handling key points of different degrees of difficulty separately improves the application range of the image key point extraction model and enhances the user experience.
  • after the image key point extraction model is updated, the process may return to step S11 until the training times of the image key point extraction model reach the preset number.
  • updating the image key point extraction model refers to adjusting the weight parameters in the image key point extraction model according to the target difference, which can be implemented through an existing neural network feedback update method and will not be repeated here.
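  • A minimal sketch of this feedback update, assuming a PyTorch-style model whose forward pass returns the key points output by every sub-model; `model`, `loader`, and `degree_of` are hypothetical placeholders, and `sub_model_difference` refers to the sketch above:

```python
import torch

def train_keypoint_model(model, loader, degree_of, preset_times, lr=1e-4):
    """model: cascaded sub-models; forward pass returns a list of per-sub-model key point outputs.
    loader: yields (feature_map, labels, degree_ids) per training image.
    degree_of(i): hypothetical mapping from sub-model index to its degree identifier."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    training_times = 0
    while training_times < preset_times:            # stop once training reaches the preset number
        for feature_map, labels, degree_ids in loader:
            outputs = model(feature_map)            # key points output by each sub-model
            # Target difference: sum of the per-sub-model differences
            target_difference = sum(
                sub_model_difference(pred, labels, degree_ids, degree_of(i))
                for i, pred in enumerate(outputs)
            )
            optimizer.zero_grad()
            target_difference.backward()            # feedback update of the weight parameters
            optimizer.step()
            training_times += 1
            if training_times >= preset_times:
                break
    return model
```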
  • when returning to the step of inputting the training image into the image key point extraction model to obtain the key points output by each sub-model, the training image used may be the training image used before or a new training image; this is not limited in the present disclosure.
  • in this way, the training process of the image key point extraction model is completed and an accurate image key point extraction model is obtained, thereby providing support for the extraction of image key points.
  • the input of the first sub-model in the image key point extraction model is the feature map of the human body image portion in the training image, and the input of each sub-model other than the first sub-model is the key points output by the previous sub-model and the feature map of the human body image portion in the training image.
  • for the sub-models in the image key point extraction model other than the first sub-model, the inputs are the key points output by the previous sub-model and the feature map of the human body image portion in the training image. Therefore, when the current sub-model performs key point extraction, it can build on the key points output by the previous sub-model, which effectively simplifies the image key point extraction process, avoids repeated data processing and calculation, and improves the efficiency of the image key point extraction model.
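  • A minimal sketch of such a cascade, assuming each sub-model is a callable; the function below only illustrates how inputs are wired between sub-models and is not the disclosure's reference implementation:

```python
def cascaded_forward(sub_models, feature_map):
    """sub_models: list of callables (e.g. torch.nn.Module instances)."""
    outputs = []
    previous_keypoints = None
    for i, sub_model in enumerate(sub_models):
        if i == 0:
            keypoints = sub_model(feature_map)                      # first sub-model: feature map only
        else:
            keypoints = sub_model(previous_keypoints, feature_map)  # later sub-models: previous key points + feature map
        outputs.append(keypoints)
        previous_keypoints = keypoints
    return outputs   # key points output by every sub-model (training uses all, inference the last)
```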
  • the feature map of the human image part in the training image is determined in the following manner, as shown in FIG. 2:
  • In step S21, a first image corresponding to the human body image portion of the training image is extracted, where the first image can be extracted by an existing human body recognition and extraction algorithm.
  • for example, the human body image in the training image may be extracted through the Faster R-CNN algorithm or the Mask R-CNN algorithm.
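  • As an illustrative sketch only, the human body image portion (the "first image") could be cropped with an off-the-shelf Faster R-CNN from torchvision, assuming a COCO-pretrained model in which label 1 corresponds to "person"; this stands in for, and is not necessarily, the detector used in the disclosure:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def extract_first_image(path, score_threshold=0.8):
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        detections = detector([to_tensor(image)])[0]
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if label.item() == 1 and score.item() >= score_threshold:   # COCO class 1: person
            x1, y1, x2, y2 = [int(v) for v in box.tolist()]
            return image.crop((x1, y1, x2, y2))                     # the human body image portion
    return None                                                     # no person detected
```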
  • In step S22, the resolution of the first image is adjusted to a preset resolution to obtain a second image, and the feature map of the human body image portion in the training image is determined according to the second image.
  • the proportions of the human body image portions in different training images may be the same or different.
  • for example, for training images obtained by the same user through continuous shooting, the proportions of the human body image portions are generally similar, while for images taken by different users, the proportions are generally different. Therefore, in order to facilitate uniform processing of the human body image portion in the training image, the resolution of the first image may be adjusted to the preset resolution to obtain the second image.
  • for example, the preset resolution may be 400×600.
  • when the resolution of the extracted first image is lower than the preset resolution, the resolution of the first image can be brought to 400×600 by enlarging the image; when the resolution of the extracted first image is greater than the preset resolution, it can be reduced to 400×600 by shrinking the image.
  • methods for enlarging or reducing an image are prior art and will not be repeated here.
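  • A minimal sketch of this adjustment, assuming PIL and using the example preset resolution of 400×600 mentioned above:

```python
from PIL import Image

PRESET_RESOLUTION = (400, 600)   # example preset resolution from the description

def to_second_image(first_image: Image.Image) -> Image.Image:
    # PIL resizes up or down as needed, covering both the enlarging and reducing cases
    return first_image.resize(PRESET_RESOLUTION, Image.BILINEAR)
```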
  • in this way, feature maps with the same resolution can be extracted from different training images, which facilitates uniform processing of the feature maps, effectively simplifies the processing flow, and increases the processing speed, thereby better meeting users' needs.
  • An embodiment of the present disclosure also provides an image key point extraction method. As shown in FIG. 3, the method includes:
  • In step S31, a target image is received, where the target image contains a human body image portion, and the human body image in the target image can be detected by the Faster R-CNN algorithm or the Mask R-CNN algorithm.
  • In step S32, the target image is input into the image key point extraction model, and the key points output by the last sub-model of the image key point extraction model are determined as the key points of the human body image portion in the target image, where the image key point extraction model includes multiple cascaded sub-models and is trained according to any of the above training methods for the image key point extraction model.
  • by inputting the target image into the image key point extraction model, the key points in the target image can be extracted.
  • because the image key point extraction model can accurately extract key points of different degrees of difficulty in the target image, it ensures, on the one hand, the comprehensiveness and completeness of key point extraction and, on the other hand, the accuracy of key point extraction, providing accurate data support for subsequent processing based on these key points and further improving the user experience.
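  • A minimal end-to-end sketch of steps S31-S32, reusing the hypothetical helpers from the sketches above; `compute_feature_map` is an assumed feature extractor not specified here:

```python
def extract_keypoints(target_image_path, model):
    first_image = extract_first_image(target_image_path)   # human body image portion (sketch above)
    second_image = to_second_image(first_image)             # adjusted to the preset resolution
    feature_map = compute_feature_map(second_image)         # assumed feature extractor
    per_sub_model_outputs = model(feature_map)               # cascaded sub-models
    return per_sub_model_outputs[-1]                         # key points output by the last sub-model
```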
  • the key points of the human body image part are the bone key points corresponding to the human body image part.
  • pose estimation may then be performed on the human body image portion in the target image according to the bone key points. Improving the prediction accuracy of the bone key points corresponding to the human body image portion therefore ensures the accuracy of the pose estimation of the human body image portion in the target image.
  • An embodiment of the present disclosure also provides a training device for an image key point extraction model.
  • the image key point extraction model includes multiple cascaded sub-models.
  • as shown in FIG. 4, the device 10 may include:
  • the processing module 100, configured to input a training image into the image key point extraction model to obtain the key points output by each sub-model, as one training pass of the image key point extraction model;
  • the first determination module 200, configured to determine, for each sub-model, the difference between the key points output by the sub-model and the key points in the training image corresponding to the degree identifier of the sub-model, where the degree identifier is used to characterize the degree of difficulty of key point extraction; and
  • the update module 300, configured to determine the sum of the differences corresponding to the sub-models as the target difference of the image key point extraction model, and, when the training times of the image key point extraction model do not reach the preset number, to update the image key point extraction model according to the target difference.
  • the processing module may then input the training image into the updated image key point extraction model to obtain the key points output by each sub-model, until the training times of the image key point extraction model reach the preset number.
  • the input of the first sub-model in the image key point extraction model is the feature map of the human body image portion in the training image, and the input of each sub-model other than the first sub-model is the key points output by the previous sub-model and the feature map of the human body image portion in the training image.
  • the apparatus may further include a feature extraction module for obtaining a feature map of the human body image part in the training image.
  • the feature extraction module may include:
  • an adjustment sub-module, configured to adjust the resolution of a first image, corresponding to the human body image portion of the training image, to a preset resolution to obtain a second image, and to determine the feature map of the human body image portion in the training image according to the second image.
  • An embodiment of the present disclosure also provides an image key point extraction device. As shown in FIG. 5, the device 20 may include:
  • the receiving module 400 is used to receive a target image, and the target image includes a human body image part;
  • the second determination module 500, configured to input the target image into the image key point extraction model and to determine the key points output by the last sub-model of the image key point extraction model as the key points of the human body image portion in the target image, where the image key point extraction model includes multiple cascaded sub-models and is obtained by training according to any of the above training methods for the image key point extraction model.
  • FIG. 6 is a block diagram of an electronic device 700 according to an embodiment of the present disclosure.
  • the electronic device 700 may include a processor 701 and a memory 702.
  • the electronic device 700 may further include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
  • the processor 701 is used to control the overall operation of the electronic device 700 to complete all or part of the steps in the training method of the image key point extraction model or the image key point extraction method.
  • the memory 702 is used to store various types of data to support operation of the electronic device 700, and the data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contact data, messages sent and received, pictures, audio, video, etc.
  • the memory 702 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • the multimedia component 703 may include a screen and an audio component.
  • the screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals.
  • the audio component may include a microphone for receiving audio signals.
  • the received audio signals may be further stored in the memory 702 or transmitted through the communication component 705.
  • the audio component also includes at least one speaker for outputting audio signals.
  • the I/O interface 704 provides an interface between the processor 701 and other interface modules.
  • the other interface modules may be a keyboard, a mouse, a button, and so on. These buttons can be virtual buttons or physical buttons.
  • the communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, or 5G, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
  • the electronic device 700 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-mentioned training method of the image key point extraction model or the image key point extraction method.
  • a computer-readable storage medium including program instructions is also provided.
  • when the program instructions are executed by a processor, the above-mentioned training method of the image key point extraction model or the image key point extraction method is implemented.
  • the computer-readable storage medium may be the above-mentioned memory 702 including program instructions, and the above-mentioned program instructions may be executed by the processor 701 of the electronic device 700 to implement the above training method or image key point extraction method for the image key point extraction model.
  • the electronic device 1900 may be provided as a server. As shown in FIG. 7, the electronic device 1900 may include: a processor 1922, the number of which may be one or more; and a memory 1932 for storing a computer program executable by the processor 1922.
  • the computer program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processor 1922 may be configured to execute the computer program to perform the above-mentioned training method of the image key point extraction model or image key point extraction method.
  • the electronic device 1900 may further include a power supply component 1926 and a communication component 1950, where the power supply component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to implement communication of the electronic device 1900, for example, wired or wireless communication.
  • the electronic device 1900 may also include an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, and so on.
  • a computer-readable storage medium including program instructions is also provided.
  • when the program instructions are executed by a processor, the above-mentioned training method of the image key point extraction model or the image key point extraction method is implemented.
  • the computer-readable storage medium may be the above-mentioned memory 1932 including program instructions, and the above-mentioned program instructions may be executed by the processor 1922 of the electronic device 1900 to complete the above training method or image key point extraction method for the image key point extraction model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method for training an image key point extraction model is disclosed. The image key point extraction model includes multiple cascaded sub-models. The method comprises: inputting a training image into the image key point extraction model and obtaining the key points output by each sub-model, as one training pass of the image key point extraction model (S11); for each sub-model, determining a difference between the key points output by the sub-model and the key points in the training image corresponding to the degree identifier of the sub-model (S12), the degree identifier being used to characterize the degree of difficulty of key point extraction; and determining the sum of the differences corresponding to all sub-models as the target difference of the image key point extraction model and, when the number of training times of the image key point extraction model has not reached a preset number, updating the image key point extraction model according to the target difference (S13). By processing key points of different degrees of difficulty separately, the present disclosure can improve the accuracy and applicable range of the image key point extraction model.
PCT/CN2019/094740 2018-12-27 2019-07-04 Training of an image key point extraction model and image key point extraction WO2020134010A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811615301.XA CN109753910B (zh) 2018-12-27 2018-12-27 关键点提取方法、模型的训练方法、装置、介质及设备
CN201811615301.X 2018-12-27

Publications (1)

Publication Number Publication Date
WO2020134010A1 true WO2020134010A1 (fr) 2020-07-02

Family

ID=66404087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094740 WO2020134010A1 (fr) 2018-12-27 2019-07-04 Training of an image key point extraction model and image key point extraction

Country Status (2)

Country Link
CN (1) CN109753910B (fr)
WO (1) WO2020134010A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053360A (zh) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 图像分割方法、装置、计算机设备及存储介质
CN112270669A (zh) * 2020-11-09 2021-01-26 北京百度网讯科技有限公司 人体3d关键点检测方法、模型训练方法及相关装置
CN112614568A (zh) * 2020-12-28 2021-04-06 东软集团股份有限公司 检查图像的处理方法、装置、存储介质和电子设备
CN113762096A (zh) * 2021-08-18 2021-12-07 东软集团股份有限公司 健康码识别方法、装置、存储介质及电子设备
CN114518801A (zh) * 2022-02-18 2022-05-20 美的集团(上海)有限公司 设备控制方法、计算机程序产品、控制设备和存储介质
CN117079242A (zh) * 2023-09-28 2023-11-17 比亚迪股份有限公司 减速带确定方法、装置、存储介质、电子设备及车辆

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753910B (zh) * 2018-12-27 2020-02-21 北京字节跳动网络技术有限公司 关键点提取方法、模型的训练方法、装置、介质及设备
CN113468924B (zh) * 2020-03-31 2024-06-18 北京沃东天骏信息技术有限公司 关键点检测模型训练方法和装置、关键点检测方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077613A (zh) * 2014-07-16 2014-10-01 电子科技大学 一种基于级联多级卷积神经网络的人群密度估计方法
CN107665351A (zh) * 2017-05-06 2018-02-06 北京航空航天大学 基于难样本挖掘的机场检测方法
WO2018052587A1 (fr) * 2016-09-14 2018-03-22 Konica Minolta Laboratory U.S.A., Inc. Procédé et système de segmentation d'image de cellule à l'aide de réseaux neuronaux convolutifs à étages multiples
CN107909053A (zh) * 2017-11-30 2018-04-13 济南浪潮高新科技投资发展有限公司 一种基于等级学习级联卷积神经网络的人脸检测方法
CN109753910A (zh) * 2018-12-27 2019-05-14 北京字节跳动网络技术有限公司 关键点提取方法、模型的训练方法、装置、介质及设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404861B (zh) * 2015-11-13 2018-11-02 中国科学院重庆绿色智能技术研究院 人脸关键特征点检测模型的训练、检测方法及***
CN106295567B (zh) * 2016-08-10 2019-04-12 腾讯科技(深圳)有限公司 一种关键点的定位方法及终端
CN106845398B (zh) * 2017-01-19 2020-03-03 北京小米移动软件有限公司 人脸关键点定位方法及装置
KR101993729B1 (ko) * 2017-02-15 2019-06-27 동명대학교산학협력단 다중채널 가버 필터와 중심대칭지역 이진 패턴기반 얼굴인식기술
CN106951840A (zh) * 2017-03-09 2017-07-14 北京工业大学 一种人脸特征点检测方法
CN108230390B (zh) * 2017-06-23 2021-01-01 北京市商汤科技开发有限公司 训练方法、关键点检测方法、装置、存储介质和电子设备
CN108960232A (zh) * 2018-06-08 2018-12-07 Oppo广东移动通信有限公司 模型训练方法、装置、电子设备和计算机可读存储介质
CN109063584B (zh) * 2018-07-11 2022-02-22 深圳大学 基于级联回归的面部特征点定位方法、装置、设备及介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077613A (zh) * 2014-07-16 2014-10-01 电子科技大学 一种基于级联多级卷积神经网络的人群密度估计方法
WO2018052587A1 (fr) * 2016-09-14 2018-03-22 Konica Minolta Laboratory U.S.A., Inc. Procédé et système de segmentation d'image de cellule à l'aide de réseaux neuronaux convolutifs à étages multiples
CN107665351A (zh) * 2017-05-06 2018-02-06 北京航空航天大学 基于难样本挖掘的机场检测方法
CN107909053A (zh) * 2017-11-30 2018-04-13 济南浪潮高新科技投资发展有限公司 一种基于等级学习级联卷积神经网络的人脸检测方法
CN109753910A (zh) * 2018-12-27 2019-05-14 北京字节跳动网络技术有限公司 关键点提取方法、模型的训练方法、装置、介质及设备

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053360A (zh) * 2020-10-10 2020-12-08 腾讯科技(深圳)有限公司 图像分割方法、装置、计算机设备及存储介质
CN112053360B (zh) * 2020-10-10 2023-07-25 腾讯科技(深圳)有限公司 图像分割方法、装置、计算机设备及存储介质
CN112270669A (zh) * 2020-11-09 2021-01-26 北京百度网讯科技有限公司 人体3d关键点检测方法、模型训练方法及相关装置
CN112270669B (zh) * 2020-11-09 2024-03-01 北京百度网讯科技有限公司 人体3d关键点检测方法、模型训练方法及相关装置
CN112614568A (zh) * 2020-12-28 2021-04-06 东软集团股份有限公司 检查图像的处理方法、装置、存储介质和电子设备
CN112614568B (zh) * 2020-12-28 2024-05-28 东软集团股份有限公司 检查图像的处理方法、装置、存储介质和电子设备
CN113762096A (zh) * 2021-08-18 2021-12-07 东软集团股份有限公司 健康码识别方法、装置、存储介质及电子设备
CN114518801A (zh) * 2022-02-18 2022-05-20 美的集团(上海)有限公司 设备控制方法、计算机程序产品、控制设备和存储介质
CN114518801B (zh) * 2022-02-18 2023-10-27 美的集团(上海)有限公司 设备控制方法、控制设备和存储介质
CN117079242A (zh) * 2023-09-28 2023-11-17 比亚迪股份有限公司 减速带确定方法、装置、存储介质、电子设备及车辆
CN117079242B (zh) * 2023-09-28 2024-01-26 比亚迪股份有限公司 减速带确定方法、装置、存储介质、电子设备及车辆

Also Published As

Publication number Publication date
CN109753910B (zh) 2020-02-21
CN109753910A (zh) 2019-05-14

Similar Documents

Publication Publication Date Title
WO2020134010A1 (fr) Training of an image key point extraction model and image key point extraction
CN109961780B (zh) 一种人机交互方法、装置、服务器和存储介质
US20200043471A1 (en) Voice data processing method, voice interaction device, and storage medium
CN106778820B (zh) 识别模型确定方法及装置
CN109492531B (zh) 人脸图像关键点提取方法、装置、存储介质及电子设备
BR112016021658B1 (pt) Método de apresentação de conteúdo, método de promover modo de apresentação de conteúdo, e terminal inteligente
WO2019019396A1 (fr) Procédé et appareil de prédiction de résultat de pousser, dispositif informatique, et support de stockage
CN109657539B (zh) 人脸颜值评价方法、装置、可读存储介质及电子设备
WO2020006762A1 (fr) Procédé de formation d'un modèle de restauration d'image, procédé et appareil de restauration d'image, support et dispositif associés
US10291838B2 (en) Focusing point determining method and apparatus
WO2020207024A1 (fr) Procédé de gestion d'autorité et produit associé
JP6309539B2 (ja) 音声入力を実現する方法および装置
CN109658346B (zh) 图像修复方法、装置、计算机可读存储介质及电子设备
US11301669B2 (en) Face recognition system and method for enhancing face recognition
CN109697446B (zh) 图像关键点提取方法、装置、可读存储介质及电子设备
JP7024255B2 (ja) 情報処理装置及びプログラム
CN110427849B (zh) 人脸姿态确定方法、装置、存储介质和电子设备
WO2019144710A1 (fr) Procédé et appareil permettant de déterminer la position d'une pupille
WO2020103606A1 (fr) Procédé et dispositif de traitement de modèle, terminal et support de stockage
JP5430636B2 (ja) データ取得装置、方法及びプログラム
CN111652382B (zh) 基于区块链的数据处理方法、装置、设备及存储介质
CN110288668B (zh) 图像生成方法、装置、计算机设备及存储介质
KR101647911B1 (ko) 이미지 복원에 의한 모바일 인증
CN115563377B (zh) 企业的确定方法、装置、存储介质及电子设备
CN107562204B (zh) 电视交互方法、电视及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19901616

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19901616

Country of ref document: EP

Kind code of ref document: A1