WO2021051650A1 - Method and apparatus for associated detection of a human face and a human hand, electronic device, and storage medium - Google Patents
Method and apparatus for associated detection of a human face and a human hand, electronic device, and storage medium
- Publication number: WO2021051650A1
- Application: PCT/CN2019/120901 (CN2019120901W)
- Authority: WIPO (PCT)
- Prior art keywords: image, feature, feature map, human, map
Classifications
- G06V40/173—Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/764—Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/82—Recognition or understanding using pattern recognition or machine learning, using neural networks
- G06V40/107—Static hand or arm
- G06V40/161—Human faces: Detection; Localisation; Normalisation
- G06V40/168—Human faces: Feature extraction; Face representation
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present disclosure relates to the field of computer vision technology, and in particular to a method and apparatus for associated detection of a human face and a human hand, an electronic device, and a storage medium.
- Face and hand association refers to associating a detected human face with a detected human hand, so that an operation performed by the hand can be attributed to a specific person through the association information.
- the present disclosure proposes a technical solution for the associated detection of human faces and human hands in image processing.
- According to an aspect of the present disclosure, there is provided a method for associated detection of a human face and a human hand, which includes: acquiring a first image, where the first image is an image of a person object; performing feature extraction on the first image to obtain first feature maps of multiple scales; performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales, where the scales of the second feature maps correspond one-to-one to the scales of the first feature maps; and detecting, based on the obtained second feature maps of the multiple scales, the associated face position and hand position of the same person object in the first image.
- Based on the above configuration, the embodiments of the present disclosure can simply and conveniently obtain the associated human face and human hand in an image, while also improving detection accuracy.
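- A minimal end-to-end sketch of this pipeline, written in a PyTorch style, is shown below; the module names `backbone`, `fpn`, and `head` are illustrative placeholders rather than components named by the disclosure (concrete sketches of each stage follow in the detailed description).

```python
import torch

def detect_face_hand(first_image: torch.Tensor, backbone, fpn, head):
    """Illustrative pipeline: multi-scale extraction -> fusion -> association masks."""
    c_maps = backbone(first_image)   # first feature maps C1..Cn of multiple scales
    f_maps = fpn(c_maps)             # second feature maps F1..Fn, scales matching C1..Cn
    masks = head(f_maps[0])          # convolve the largest-scale map F1 into face/hand masks
    return masks
```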
- In some possible implementations, the acquiring the first image includes: acquiring a second image, where the second image is an image including at least one person object; performing human target detection on the second image to obtain a detection frame of any one of the at least one person object in the second image; and determining the image area corresponding to the detection frame of the person object in the second image as the first image of that person object.
- the first image obtained by the embodiment of the present disclosure removes the influence of other environmental factors, which can further improve the detection accuracy.
- In some possible implementations, the performing feature extraction on the first image to obtain first feature maps of multiple scales includes: adjusting the first image into a third image of a preset scale; and inputting the third image into a residual network to obtain the first feature maps of the multiple scales. Based on the above configuration, the scales of input images can be unified, improving applicability.
- In some possible implementations, the performing feature fusion processing on the first feature maps of the multiple scales to obtain the second feature maps of the multiple scales includes: inputting the first feature maps of the multiple scales into a feature pyramid network, and performing the feature fusion processing through the feature pyramid network to obtain the second feature maps of the multiple scales. Based on the above configuration, the feature accuracy of the obtained second feature maps of multiple scales can be improved.
- In some possible implementations, the multiple first feature maps are represented as {C_1, ..., C_n}, where n represents the number of first feature maps and n is an integer greater than 1.
- The performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales includes: performing convolution processing on the first feature map C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where the scale of the first feature map C_n is the same as the scale of the second feature map F_n; performing linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to F_n, where the scale of F'_n is the same as the scale of the first feature map C_{n-1}; performing convolution processing on each first feature map C_i other than C_n using a second convolution kernel to obtain a second intermediate feature map C'_i corresponding to C_i, where the scale of C'_i is the same as the scale of the first intermediate feature map F'_{i+1}, and i is an integer variable greater than or equal to 1 and less than n; and obtaining each second feature map F_i other than F_n using the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1}, where F'_{i+1} is obtained by linear interpolation of the corresponding second feature map F_{i+1}.
- In some possible implementations, the obtaining of the second feature map F_i includes: adding the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1} to obtain the second feature map F_i. Based on the above configuration, the feature information of the two intermediate feature maps can be effectively merged.
- In some possible implementations, the detecting the associated face position and hand position of the same person object in the first image based on the obtained second feature maps of the multiple scales includes: performing convolution processing on the second feature map with the largest scale among the second feature maps of the multiple scales to obtain a mask map representing the face position and a mask map representing the hand position; and determining, based on the mask map of the face position and the mask map of the hand position, the location areas where the associated human hand and human face in the first image are located. Based on the above configuration, the positions of the associated face and hand can be conveniently predicted and indicated.
- In some possible implementations, the scale relationship between the first feature maps of the multiple scales is: L(C_{i-1}) = 2^{k_1} · L(C_i) and W(C_{i-1}) = 2^{k_1} · W(C_i), where C_i represents each first feature map, L(C_i) represents the length of the first feature map C_i, W(C_i) represents the width of the first feature map C_i, k_1 is an integer greater than or equal to 1, i is a variable in the range [2, n], and n represents the number of first feature maps.
- In some possible implementations, the method further includes at least one of the following: highlighting the associated human hand and human face in the first image; and assigning the same label to the associated face position and hand position detected in the first image. Based on the above configuration, the image areas where the associated face and hand are located can be intuitively presented, and the association detection results of different person objects can be effectively distinguished.
- In some possible implementations, the method is implemented by a neural network, where the step of training the neural network includes: obtaining a training image, where the training image is an image including a person object and has annotation information of the real associated face position and hand position; inputting the training image into the neural network, and predicting, through the neural network, the associated face position and hand position of the same person object in the training image; and determining a network loss based on the predicted associated face position and hand position and the annotation information, and adjusting network parameters of the neural network according to the network loss until the training requirements are met. Based on the above configuration, optimized training of the neural network can be realized to ensure the accuracy of network detection.
- According to a second aspect of the present disclosure, there is provided an apparatus for associated detection of a human face and a human hand, which includes: an acquisition module for acquiring a first image, where the first image is an image of a person object; a feature extraction module for performing feature extraction on the first image to obtain first feature maps of multiple scales; a fusion module for performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales, where the scales of the second feature maps correspond one-to-one to the scales of the first feature maps; and a detection module configured to detect, based on the obtained second feature maps of the multiple scales, the associated face position and hand position of the same person object in the first image.
- In some possible implementations, the acquisition module includes: an acquisition unit configured to acquire a second image, where the second image is an image including at least one person object; a target detection unit configured to perform human target detection on the second image to obtain a detection frame of any one of the at least one person object in the second image; and a determining unit configured to determine the image area corresponding to the detection frame of the person object in the second image as the first image of that person object.
- In some possible implementations, the feature extraction module is further configured to adjust the first image into a third image of a preset scale, and input the third image into the residual network to obtain the first feature maps of the multiple scales.
- In some possible implementations, the fusion module is further configured to input the first feature maps of the multiple scales into a feature pyramid network, and perform the feature fusion processing through the feature pyramid network to obtain the second feature maps of the multiple scales.
- In some possible implementations, the multiple first feature maps are represented as {C_1, ..., C_n}, where n represents the number of first feature maps and n is an integer greater than 1.
- The fusion module is further configured to: perform convolution processing on the first feature map C_n using a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where the scale of the first feature map C_n is the same as the scale of the second feature map F_n; perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to F_n, where the scale of F'_n is the same as the scale of the first feature map C_{n-1}; perform convolution processing on each first feature map C_i other than C_n using a second convolution kernel to obtain a second intermediate feature map C'_i corresponding to C_i, where the scale of C'_i is the same as the scale of the first intermediate feature map F'_{i+1}, and i is an integer variable greater than or equal to 1 and less than n; and obtain each second feature map F_i other than F_n using the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1}, where F'_{i+1} is obtained by linear interpolation of the corresponding second feature map F_{i+1}.
- In some possible implementations, the fusion module is further configured to add the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1} to obtain the second feature map F_i.
- In some possible implementations, the detection module is further configured to: perform convolution processing on the second feature map with the largest scale among the second feature maps of the multiple scales to obtain a mask map representing the face position and a mask map representing the hand position; and determine, based on the mask map of the face position and the mask map of the hand position, the location areas where the associated human hand and human face in the first image are located.
- In some possible implementations, the scale relationship between the first feature maps of the multiple scales is: L(C_{i-1}) = 2^{k_1} · L(C_i) and W(C_{i-1}) = 2^{k_1} · W(C_i), where C_i represents each first feature map, L(C_i) represents the length of the first feature map C_i, W(C_i) represents the width of the first feature map C_i, k_1 is an integer greater than or equal to 1, i is a variable in the range [2, n], and n represents the number of first feature maps.
- the device further includes at least one of a display module and an allocation module, wherein the display module is configured to highlight the associated human hand and human face in the first image;
- the allocation module is configured to allocate the same label to the associated face position and human hand position detected in the first image.
- In some possible implementations, the apparatus includes a neural network; the feature extraction module, the fusion module, and the detection module apply the neural network, and the apparatus further includes a training module for training the neural network, where the step of training the neural network includes: obtaining training images, where the training images are images including person objects and have annotation information of the real associated face positions and hand positions; inputting a training image into the neural network, and predicting, through the neural network, the associated face position and hand position of the same person object in the training image; and determining a network loss based on the predicted associated face position and hand position and the annotation information, and adjusting network parameters of the neural network according to the network loss until the training requirements are met.
- According to a third aspect of the present disclosure, there is provided an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to invoke the instructions stored in the memory to perform the method described in any one of the first aspect.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method described in any one of the first aspect is implemented.
- a computer program including computer-readable code, where, when the computer-readable code runs in an electronic device, a processor in the electronic device performs the method described in any one of the first aspect.
- In the embodiments of the present disclosure, a first image corresponding to the area where a person object is located can be determined from a second image, and feature extraction processing is performed on the first image to obtain corresponding feature maps; multi-scale feature fusion processing is then performed on the feature maps to obtain second feature maps of multiple scales, where the second feature maps have more accurate feature information than the first feature maps. Through the second feature maps, the positions of the associated human hand and human face in the first image can be obtained, improving the accuracy of face and hand detection.
- the technical solutions of the embodiments of the present disclosure do not need to obtain the key points of the human ear or the wrist, and can directly obtain the positions of the associated human hands and faces in the image, which is simple, convenient and highly accurate.
- Fig. 1 shows a flowchart of a method for associated detection of a human face and a human hand according to an embodiment of the present disclosure;
- Fig. 2 shows a flowchart of step S10 in the method according to an embodiment of the present disclosure;
- Fig. 3 shows a schematic diagram of a second image according to an embodiment of the present disclosure;
- Fig. 4 shows a flowchart of step S20 in the method according to an embodiment of the present disclosure;
- Fig. 5 shows a flowchart of step S30 in the method according to an embodiment of the present disclosure;
- Fig. 6 shows a schematic diagram of a feature extraction and feature fusion process according to an embodiment of the present disclosure;
- Fig. 7 shows a flowchart of step S40 in the method according to an embodiment of the present disclosure;
- Fig. 8 shows a flowchart of training a neural network according to an embodiment of the present disclosure;
- Fig. 9 shows a block diagram of an apparatus for associated detection of a human face and a human hand according to an embodiment of the present disclosure;
- Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present disclosure;
- Fig. 11 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
- the embodiments of the present disclosure provide a method for detecting the association of a face and a human hand, which can be applied to any image processing device.
- the method can be applied to a terminal device or a server, or can also be applied to other processing devices.
- terminal devices may include user equipment (UE), mobile devices, user terminals, terminals, cellular phones, cordless phones, personal digital assistants (PDAs), handheld devices, computing devices, vehicle-mounted devices, wearable devices, etc.
- the method for detecting a human face and a human hand association may be implemented by a processor invoking a computer-readable instruction stored in a memory.
- Fig. 1 shows a flowchart of a method for associated detection of a human face and a human hand according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
- S10 Acquire a first image, where the first image is an image of a person object;
- the first image may be an image of a human object, which may include at least one human face and at least one human hand.
- The embodiments of the present disclosure can realize association detection of the human hand and human face of the person object in the first image, where association means that the detected human face and human hand belong to the same person object.
- In some possible implementations, the method of acquiring the first image may include: directly acquiring the first image through an image acquisition device, where the image acquisition device may be a device with an image acquisition function, such as a mobile phone, a camera, or a video camera.
- the method of acquiring the first image may also include receiving the first image transmitted from another device, or reading the first image from the memory, or the first image may also be an image frame obtained after a frame selection operation is performed from a video stream. This disclosure does not specifically limit this.
- In other possible implementations, the first image may also be a partial image area of another image. For example, the first image may be an image area selected from another image according to received selection information, or an image area obtained by detecting the human body through target detection, which is not specifically limited in the present disclosure.
- S20 Perform feature extraction on the first image to obtain first feature maps of multiple scales;
- In some possible implementations, feature extraction processing may be performed on the first image to obtain first feature maps of multiple scales. For example, the first image may be input into a feature extraction network to obtain the first feature maps of multiple scales, where the feature extraction network may be a convolutional neural network, such as a residual network (ResNet); feature extraction of the first image is performed through the residual network to obtain first feature maps of at least two scales.
- Alternatively, the feature maps of multiple scales may be obtained by up-sampling or down-sampling the first image, for example, using different sampling rates.
- S30 Perform feature fusion processing on the first feature maps of multiple scales to obtain second feature maps of multiple scales, and the scales of the second feature maps correspond to the scales of the first feature maps one-to-one;
- feature fusion processing may be performed on the first feature maps of multiple scales to obtain second feature maps of corresponding scales.
- the accuracy of the feature information in each second feature map can be improved through feature fusion, so that the accuracy of the correlation detection between the face and the hand can be further improved.
- In some possible implementations, the feature fusion processing of the first feature maps of the multiple scales can be performed through a feature pyramid network, where feature information of first feature maps of adjacent scales is fused; the fusion proceeds sequentially, merging the feature information of small-scale first feature maps into the feature information of large-scale first feature maps, so that second feature maps integrating the feature information of the first feature maps of all scales can finally be obtained.
- S40 Based on the obtained second feature maps of the multiple scales, detect the associated face position and human hand position for the same person object in the first image.
- the association detection of the human face and the human hand may be performed based on the second feature maps of the multiple scales.
- convolution processing may be performed on at least one second feature map in the second feature map of each scale, so as to obtain the associated face position and human hand position in the first image.
- For example, the second feature map with the largest scale can be input into a convolutional layer for convolution processing to obtain mask maps of the face position and the hand positions, which may include a first mask map of the face position, a second mask map of the left-hand position, and a third mask map of the right-hand position; the positions of the associated human hand and human face in the first image can then be determined through the obtained mask maps.
- Based on the above configuration, the embodiments of the present disclosure do not need to acquire key points of the human ears or wrists, nor analyze whether a Gaussian distribution is satisfied; the associated human hand and human face can be obtained directly through multi-scale feature extraction and feature fusion on the first image, which is simple, convenient, and highly accurate.
- As described in the foregoing embodiments, the first image obtained in the embodiments of the present disclosure may be an image of one person object. In practice, the captured image may include multiple person objects; in order to improve the detection accuracy for the face and hand of the same person object, the embodiments of the present disclosure can obtain the image area of each person object from the captured image, then perform feature extraction and feature fusion on each image area separately, and finally obtain the face and hand positions of each person object.
- Fig. 2 shows a flowchart of step S10 in a method for associated detection of a human face and a human hand according to an embodiment of the present disclosure.
- the acquiring the first image includes:
- S101 Acquire a second image, where the second image is an image including at least one person object;
- the first image may be an image obtained based on the second image, where the second image may be an image of at least one human object.
- the manner of acquiring the second image may include: directly acquiring the second image through an image acquisition device, where the image acquisition device may be a device with an image acquisition function, such as a mobile phone, a camera, or a video camera.
- the method of acquiring the second image may also include receiving the second image transmitted from another device, or reading the second image from the memory, or the second image may also be an image frame obtained by performing a frame selection operation from a video stream. This disclosure does not specifically limit this.
- Fig. 3 shows a schematic diagram of a second image according to an embodiment of the present disclosure.
- For example, the second image may include five person objects A, B, C, D, and E.
- the second image may also include only one person object, or may also include another number of person objects, which is not specifically limited in the present disclosure.
- S102 Perform human target detection on the second image to obtain a detection frame of any one of the at least one human object in the second image;
- In some possible implementations, the position of the human body region of each person object in the second image can be detected, to obtain the first image corresponding to that person object.
- the obtained first image may include a human body area of one human object, and at least a part of images of other human objects, such as human faces or hands of other objects, may also be included.
- The human hand and human face belonging to the same person object in the first image are then obtained by performing subsequent processing on the first image.
- Since the second image may include at least one person object, the embodiments of the present disclosure can perform target detection on the second image to detect the human body region of each person object in the second image and obtain the detection frame of each person object.
- the detection frame corresponding to the human object in the second image can be detected by a neural network capable of performing human target detection.
- The neural network may be a convolutional neural network that has been trained to accurately recognize each person object in an image together with the location area (i.e., detection frame) of the corresponding person object; for example, it may be an R-CNN network, or another neural network capable of target detection, which is not specifically limited in the present disclosure.
- Through the above target detection, the detection frame corresponding to the human body area of each person object in the image can be obtained, for example, the detection frame A1 of person object A and the detection frame D1 of person object D in Fig. 3. The above is only an exemplary description; the detection frames of other person objects may also be detected.
- In a possible implementation, the detection frame of each person object in the image can be identified, and the detection frames that meet a quality requirement can also be identified, while detection frames whose quality value is less than a quality threshold are discarded. For example, the detection frames corresponding to person objects B, C, and E may be determined as detection frames that do not meet the quality requirement, and those detection frames may be deleted.
- In some possible implementations, the quality value of a detection frame can be the score or confidence obtained together with the detection frame during target detection; when the score or confidence is greater than the quality threshold, it is determined that the detection frame satisfies the quality requirement.
- the quality threshold may be a set value, such as 80%, or may also be another value less than 1, which is not specifically limited in the present disclosure.
- S103 Determine an image area of the detection frame of any human object in the second image as the first image corresponding to any human object.
- the image area corresponding to the detection frame in the second image may be determined as the first image of the human object corresponding to the detection frame.
- the detection frame A1 of the person object A and the detection frame D1 of the person object D in the second image can be obtained.
- Correspondingly, the image area corresponding to the detection frame A1 may be determined as the first image of person object A, and the image area corresponding to the detection frame D1 may be determined as the first image of person object D.
- the first image obtained by the embodiment of the present disclosure removes the influence of other environmental factors, which can further improve the detection accuracy.
- Through the above, the image area (first image) of each person object can be obtained from the second image. Although the obtained first image is an image for one person object, in actual applications, because the person objects in the second image may be close to one another, the obtained first image may also include at least a part of other person objects; for example, the detection frame D1 in Fig. 3 may include part of the face of person object C in addition to person object D. The embodiments of the present disclosure can still obtain the positions of the face and hand of the same person object in the first image through subsequent processing.
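- As a concrete illustration of steps S102 and S103, the sketch below crops a first image for each person object from detector output, keeping only detection frames above the quality threshold; the detector interface and the 0.8 threshold value are assumptions for illustration, not fixed by the disclosure.

```python
import torch

def crop_person_images(second_image: torch.Tensor, boxes: torch.Tensor,
                       scores: torch.Tensor, quality_threshold: float = 0.8):
    """second_image: (C, H, W); boxes: (N, 4) of (x1, y1, x2, y2) from any human
    detector (e.g. an R-CNN); scores: (N,) confidences. Returns one first image
    per detection frame that meets the quality requirement."""
    first_images = []
    for box, score in zip(boxes, scores):
        if score < quality_threshold:          # discard frames below the quality threshold
            continue
        x1, y1, x2, y2 = box.round().int().tolist()
        first_images.append(second_image[:, y1:y2, x1:x2])  # image area of the frame
    return first_images
```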
- Fig. 4 shows a flowchart of step S20 in a method for associated detection of a human face and a human hand according to an embodiment of the present disclosure, where the performing feature extraction on the first image to obtain first feature maps of multiple scales includes:
- S201 Adjust the first image into a third image of a preset scale;
- In some possible implementations, the scales of the obtained first images may differ. The obtained first images can therefore be adjusted to the same scale, that is, the preset scale, so that subsequent processing can be performed on images of a uniform scale.
- The preset scale of the embodiments of the present disclosure may be determined according to the design and configuration of the network; for example, it may be 256*192 (height*width), but this is not a specific limitation of the present disclosure.
- the method for adjusting the image scale may include at least one of up-sampling, down-sampling, and image interpolation, which is not specifically limited in the present disclosure, and the third image with a preset scale may also be obtained in other ways.
- S202 Input the third image to the residual network to obtain first feature maps of the multiple scales.
- In some possible implementations, feature extraction processing can be performed on the third image. For example, the third image can be input into a residual network (such as ResNet-50) to perform feature extraction and obtain first feature maps of different scales, where the first feature maps of different scales can be output by different convolutional layers of the residual network.
- In other embodiments, the multi-scale first feature maps can also be obtained through other feature extraction networks, such as a pyramid feature extraction network, or through up-sampling or down-sampling at different sampling rates; for example, the sampling rates may be 1/8, 1/16, 1/32, etc., which is not limited by the embodiments of the present disclosure.
- In some possible implementations, the scale relationship between the obtained first feature maps is L(C_{i-1}) = 2^{k_1} · L(C_i) and W(C_{i-1}) = 2^{k_1} · W(C_i), where C_i represents each first feature map, L(C_i) represents the length of the first feature map C_i, W(C_i) represents the width of the first feature map C_i, k_1 is an integer greater than or equal to 1, i is a variable in the range [2, n], and n is the number of first feature maps. That is, the length and width of adjacent first feature maps differ by a factor equal to the k_1-th power of 2.
- For example, the number of first feature maps obtained in the embodiments of the present disclosure may be 4, represented as first feature maps C_1, C_2, C_3, and C_4, where the length and width of C_1 may be respectively twice the length and width of C_2, the length and width of C_2 may be respectively twice those of C_3, and the length and width of C_3 may be respectively twice those of C_4.
- In the above example, the length multiples and width multiples between C_1 and C_2, between C_2 and C_3, and between C_3 and C_4 are the same, that is, the value of k_1 is 1. In other examples, k_1 may take different values: for example, the length and width of C_1 may be respectively twice those of C_2, the length and width of C_2 may be respectively four times those of C_3, and the length and width of C_3 may be respectively eight times those of C_4.
- the embodiment of the present disclosure does not limit this.
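- A minimal sketch of steps S201 and S202, assuming torchvision's ResNet-50 as the residual network: the 256*192 preset scale is taken from the text, while the choice of the four stage outputs as C_1..C_4 (so that k_1 = 1) is an illustrative assumption.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

class MultiScaleExtractor(torch.nn.Module):
    """Outputs four first feature maps C1..C4 whose length and width halve at each step."""
    def __init__(self):
        super().__init__()
        net = resnet50()
        self.stem = torch.nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.stages = torch.nn.ModuleList([net.layer1, net.layer2, net.layer3, net.layer4])

    def forward(self, first_image: torch.Tensor):     # first_image: (N, 3, H, W)
        # S201: adjust the first image to the preset scale (height*width = 256*192)
        third_image = F.interpolate(first_image, size=(256, 192), mode='bilinear',
                                    align_corners=False)
        x = self.stem(third_image)
        c_maps = []
        for stage in self.stages:      # S202: collect C1..C4 from the residual network
            x = stage(x)
            c_maps.append(x)
        return c_maps                  # scales 64*48, 32*24, 16*12, 8*6
```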
- feature fusion processing of each first feature map may be further executed to improve the accuracy of the obtained feature information of the second feature map.
- In some possible implementations, the feature fusion processing on the first feature maps may be performed using a feature pyramid network (FPN); that is, the first feature maps of multiple scales can be input into the feature pyramid network, and the feature fusion processing is performed through the feature pyramid network to obtain the second feature maps corresponding to the first feature maps.
- In other embodiments, the feature fusion processing can also be performed in other ways; for example, second feature maps of multiple scales can be obtained through convolution processing and up-sampling processing. Based on the above configuration, the feature accuracy of the obtained second feature maps of multiple scales can be improved.
- Fig. 5 shows a flowchart of step S30 in a method for associated detection of a human face and a human hand according to an embodiment of the present disclosure, where the performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales includes:
- In some possible implementations, the first feature maps obtained in the embodiments of the present disclosure can be expressed as {C_1, ..., C_n}, that is, n first feature maps, where C_n is the feature map with the smallest length and width, i.e., the first feature map with the smallest scale; as the subscript increases, the scale of the corresponding first feature map becomes smaller.
- For example, the scales of the first feature maps C_1, C_2, C_3, and C_4 mentioned above decrease in sequence.
- In the feature fusion, the second feature map F_n corresponding to the smallest-scale first feature map C_n can be obtained first: convolution processing is performed on the first feature map C_n using a first convolution kernel to obtain the second feature map F_n corresponding to C_n, where the scale of C_n is the same as the scale of F_n. Correspondingly, F_n is also the feature map with the smallest scale among the second feature maps.
- In some possible implementations, the first convolution kernel may be a 3*3 convolution kernel, or another type of convolution kernel.
- Linear interpolation processing is then performed on the second feature map F_n to obtain the first intermediate feature map F'_n corresponding to F_n, where the scale of the first intermediate feature map F'_n is the same as the scale of the first feature map C_{n-1}.
- For example, when the scale of C_{n-1} is twice the scale of C_n, the length of F'_n is twice the length of F_n, and the width of F'_n is twice the width of F_n.
- Next, the second intermediate feature maps C'_1 ... C'_{n-1} corresponding to the first feature maps C_1 ... C_{n-1} other than C_n can be obtained.
- Specifically, the second convolution kernel can be used to perform convolution processing on the first feature maps C_1 ... C_{n-1} respectively, to obtain second intermediate feature maps C'_1 ... C'_{n-1} in one-to-one correspondence with the first feature maps C_1 ... C_{n-1}, where the second convolution kernel may be a 1*1 convolution kernel, although the present disclosure is not specifically limited to this.
- The scale of each second intermediate feature map obtained through the convolution processing with the second convolution kernel is the same as the scale of the corresponding first feature map.
- In the embodiments of the present disclosure, the second intermediate feature maps may be obtained in descending order of scale index: first the second intermediate feature map C'_{n-1} corresponding to the first feature map C_{n-1}, then the second intermediate feature map C'_{n-2} corresponding to C_{n-2}, and so on, until the second intermediate feature map C'_1 corresponding to C_1 is obtained.
- Correspondingly, the first intermediate feature maps F'_1 ... F'_{n-1} other than the first intermediate feature map F'_n can also be obtained.
- Each second feature map F_i other than F_n can then be expressed as F_i = C'_i + F'_{i+1}, where the scale (length and width) of the second intermediate feature map C'_i is equal to the scale (length and width) of the first intermediate feature map F'_{i+1}, and the length and width of C'_i are the same as those of the first feature map C_i, so the length and width of the obtained second feature map F_i are respectively the length and width of the first feature map C_i; here, i is an integer greater than or equal to 1 and less than n.
- In some possible implementations, each second feature map F_i other than F_n is obtained in reverse order. That is, the embodiments of the present disclosure can first obtain the second feature map F_{n-1}: the second intermediate feature map C'_{n-1} corresponding to the first feature map C_{n-1} is added to the first intermediate feature map F'_n to obtain F_{n-1}, where the length and width of C'_{n-1} are respectively the same as those of F'_n, and the length and width of F_{n-1} equal those of C'_{n-1} and F'_n.
- Since the scale of C_{n-1} is twice the scale of C_n, the length and width of the second feature map F_{n-1} are respectively twice the length and width of the second feature map F_n.
- Linear interpolation processing may then be performed on the second feature map F_{n-1} to obtain the first intermediate feature map F'_{n-1}, so that the scale of F'_{n-1} is the same as the scale of C_{n-2}; in turn, the second intermediate feature map C'_{n-2} corresponding to the first feature map C_{n-2} can be added to F'_{n-1} to obtain the second feature map F_{n-2}, where the length and width of C'_{n-2} are the same as those of F'_{n-1}, and the length and width of F_{n-2} equal those of C'_{n-2} and F'_{n-1}. This process is repeated until the second feature map F_1 is obtained.
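- The recurrence above (F_n obtained from C_n by a 3*3 convolution; F_i = C'_i + F'_{i+1}, with C'_i a 1*1 convolution of C_i and F'_{i+1} a linear interpolation of F_{i+1}) can be sketched as follows; the channel counts are illustrative assumptions matching typical ResNet-50 stage outputs.

```python
import torch
import torch.nn.functional as F

class FeatureFusion(torch.nn.Module):
    """Top-down fusion producing second feature maps F1..Fn from first feature maps C1..Cn."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # first convolution kernel (3*3), applied to the smallest-scale map Cn
        self.first_conv = torch.nn.Conv2d(in_channels[-1], out_channels, 3, padding=1)
        # second convolution kernels (1*1), applied to C1..C(n-1)
        self.lateral = torch.nn.ModuleList(
            [torch.nn.Conv2d(c, out_channels, 1) for c in in_channels[:-1]])

    def forward(self, c_maps):                 # c_maps = [C1, ..., Cn], scales decreasing
        f = self.first_conv(c_maps[-1])        # Fn, same scale as Cn
        outs = [f]
        for i in range(len(c_maps) - 2, -1, -1):   # reverse order: F(n-1), ..., F1
            c_prime = self.lateral[i](c_maps[i])   # C'_i, same scale as C_i
            f_prime = F.interpolate(f, size=c_prime.shape[-2:], mode='bilinear',
                                    align_corners=False)  # F'_{i+1} by linear interpolation
            f = c_prime + f_prime              # F_i = C'_i + F'_{i+1}
            outs.append(f)
        return outs[::-1]                      # [F1, ..., Fn]
```

- Chaining this with the extractor sketched earlier, `FeatureFusion()(MultiScaleExtractor()(img))` yields F_1..F_4 with the same scale pairing as C_1..C_4 shown in Fig. 6.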
- FIG. 6 shows a schematic diagram of a feature extraction and feature fusion process according to an embodiment of the present disclosure.
- As shown in Fig. 6, the feature extraction process can be performed through the residual network a, where four convolutional layers of the residual network output four first feature maps C_1, C_2, C_3, and C_4 of different scales, and the feature pyramid network b then performs feature fusion processing to obtain multi-scale second feature maps.
- Specifically, the first feature map C_4 may first be processed with a 3*3 convolution kernel to obtain a new feature map F_4 (a second feature map), whose length and width are the same as those of C_4. Proceeding through the FPN as described above, four second feature maps of different scales are obtained, denoted F_1, F_2, F_3, and F_4 respectively; for example, the length and width of the second feature map F_1 are respectively twice those of the second feature map F_2.
- The length and width multiples between F_1 and F_2 are the same as the multiples between C_1 and C_2, the multiples between F_2 and F_3 are the same as those between C_2 and C_3, and the multiples between F_3 and F_4 are the same as those between C_3 and C_4.
- feature information of different scales can be merged to further improve feature accuracy.
- the second feature maps corresponding to the first feature maps of multiple scales can be obtained in the above manner, and the feature information of the second feature maps has improved accuracy compared with the feature information of the first feature maps.
- Fig. 7 shows a flowchart of step S40 in a method for associated detection of a human face and a human hand according to an embodiment of the present disclosure, where the detecting, based on the obtained second feature maps of the multiple scales, the associated face position and hand position of the same person object in the first image includes:
- S401 Perform convolution processing on the second feature map with the largest scale in the second feature maps of multiple scales to obtain a mask map representing the position of the face and a mask map representing the position of the human hand respectively;
- In some possible implementations, at least one of the obtained second feature maps of the multiple scales may be input into a convolutional layer, further feature fusion may be performed on the at least one second feature map, and the mask map of the face position and the mask map of the hand position of the same person object in the first image are correspondingly generated.
- For example, the embodiments of the present disclosure can input the largest-scale second feature map into the convolutional layer to perform the associated detection of the hand position and the face position.
- In some possible implementations, the obtained mask map may be composed of elements 1 and 0, where 1 indicates the location area of a human hand or a human face.
- For example, the embodiments of the present disclosure can obtain a first mask map of the face position, a second mask map of the left-hand position, and a third mask map of the right-hand position of the same person object; through the positions of the 1 elements in each mask map, the positions of the corresponding face and hands in the first image can be obtained.
- If a human hand is not detected, the mask map corresponding to that hand may be an all-zero mask map; likewise, if no human face is detected, the output face mask map may also be an all-zero mask map.
- In addition, the obtained mask map may be associated with a person object identifier and a type identifier, where the person object identifier is used to distinguish different person objects (different person objects have different person object identifiers), and the type identifier is used to indicate whether the mask map corresponds to a face position, a left-hand position, or a right-hand position.
- S402 Determine the location area where the human hand and the face are associated in the first image based on the mask image of the human face position and the mask image of the human hand position.
- Through the obtained mask maps, the location areas corresponding to the associated human hand and human face in the first image can be further obtained.
- In some possible implementations, the scales of the obtained mask maps can be the same as the scale of the first image, so that the face position determined according to the mask map can be mapped to the corresponding face image area in the first image, and the hand position determined according to the mask map can be mapped to the hand image area in the first image, thereby obtaining the location areas of the associated human hand and human face.
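- A sketch of steps S401 and S402 under the assumptions of the earlier sketches: a convolutional head turns the largest-scale second feature map F_1 into three mask maps (face, left hand, right hand), and each mask is mapped back to a location area. The sigmoid output and the 0.5 binarization threshold are illustrative assumptions.

```python
import torch

class MaskHead(torch.nn.Module):
    """Predicts one mask map per channel: 0 = face, 1 = left hand, 2 = right hand."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_channels, 3, kernel_size=1)

    def forward(self, f1: torch.Tensor):       # f1: the largest-scale second feature map
        return torch.sigmoid(self.conv(f1))    # values in [0, 1], thresholded below

def mask_to_location(mask: torch.Tensor, threshold: float = 0.5):
    """Maps one predicted mask map to a location area (x1, y1, x2, y2); None if all-zero."""
    ys, xs = torch.nonzero(mask > threshold, as_tuple=True)
    if ys.numel() == 0:                        # undetected part: an all-zero mask map
        return None
    return (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())
```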
- In some possible implementations, the associated human face and human hand may be highlighted in the first image based on the obtained mask maps; for example, the mask maps may be represented in the image area of the first image in the form of detection frames to indicate the associated human face and human hand.
- the face detection frame D11 and the hand detection frames D12 and D13 associated with the person object D can be displayed in the image.
- the embodiment of the present disclosure can also assign the same label to the associated human face and human hand to identify that the human face and human hand are the human face and human hand of the same person object.
- the associated human face and hand position obtained in the embodiment of the present disclosure may also be used to determine the posture change of the human object.
- For example, the first image may be obtained based on image frames in a video stream, and the method of the embodiments of the present disclosure can detect the change of the face position and the change of the hand position of the same person object across image frames.
- In some possible implementations, the face and hand association detection method in the embodiments of the present disclosure can be implemented by a neural network, such as a convolutional neural network; for example, the convolutional neural network can be constructed from a residual network and a feature pyramid network.
- The embodiments of the present disclosure can also train the neural network to obtain a neural network that meets the accuracy requirements.
- FIG. 8 shows a flowchart of training a neural network according to an embodiment of the present disclosure.
- As shown in Fig. 8, training the neural network may include:
- S501 Obtain a training image, where the training image is an image including a human object, and the training image has labeling information of a real-associated face position and a human hand position;
- the training image may be an image of a person object, and at the same time, the training image may also include parts of the faces or hands of other person objects, so that the training accuracy can be improved.
- the number of training images is multiple, and the present disclosure does not limit the number of training images.
- the training image may be associated with real annotation information to supervise the training of the neural network.
- Each training image has annotation information of the real associated face position and hand position, which represents the face position and hand positions (left hand and right hand) of the same person object in the training image. The annotation information can be expressed as labeled bounding frames, in the form of position coordinates, or as mask maps of the real associated hand and face positions; any form from which the associated face position and hand position in the training image can be determined can be used in the embodiments of the present disclosure.
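- When the annotation information is given as bounding frames, it can be converted into the mask-map form used for supervision. The sketch below shows one such conversion; the box-based annotation format is an assumption about how labels might be stored, not a step mandated by the disclosure.

```python
import torch

def boxes_to_target_masks(face_box, left_box, right_box, height: int, width: int):
    """Builds the real mask maps (face, left hand, right hand) from labeled frames.
    Each box is (x1, y1, x2, y2) in mask-map coordinates, or None if the part is absent."""
    target = torch.zeros(3, height, width)
    for channel, box in enumerate((face_box, left_box, right_box)):
        if box is None:                        # absent part stays an all-zero mask map
            continue
        x1, y1, x2, y2 = box
        target[channel, y1:y2, x1:x2] = 1.0    # 1 marks the location area of the part
    return target
```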
- S502 Input the training image to the neural network, and predict the associated face position and human hand position for the same person object in the training image through the neural network;
- the training image can be input to the neural network to perform feature extraction, feature fusion, and detection of associated human hands and face positions.
- Among them, the multi-scale feature extraction of the training image can be performed through a feature extraction network, such as a residual network, to obtain first predicted feature maps of multiple scales.
- Then, feature fusion processing can be performed on the first predicted feature maps of the multiple scales; for example, the feature pyramid network (FPN) is used to fuse the multiple first predicted feature maps to obtain second predicted feature maps of multiple scales. The specific feature fusion process is not repeated here; refer to the above embodiments.
- Further, convolution processing can be performed on each second predicted feature map to obtain the prediction mask maps of the associated face and hand positions predicted from each second predicted feature map.
- S503 Determine a network loss based on the associated face position and hand position predicted for the training image and the label information, and adjust the network parameters of the neural network according to the network loss until the training requirement is met.
- the embodiment of the present disclosure can obtain the network loss according to the difference between the face prediction mask map and the human hand prediction mask map predicted by the second prediction feature map of each scale and the corresponding mask map of the real human face and human hand.
- the network loss can be determined by the logarithmic loss function.
- the embodiment of the present disclosure can directly use the logarithmic loss function to obtain the loss between the prediction mask maps obtained from the second predicted feature map of each scale and the labeled real mask maps, use this loss as the network loss, and adjust the parameters of the neural network accordingly.
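- As a hedged sketch, the per-scale logarithmic loss can be computed as a binary cross-entropy between the predicted mask logits and the ground-truth mask resized to the prediction's scale; the resizing mode and the function name are illustrative assumptions.

```python
import torch.nn.functional as F

def log_loss_at_scale(pred_logits, gt_mask):
    # resize the labeled real mask map to the prediction's spatial size
    gt = F.interpolate(gt_mask, size=pred_logits.shape[-2:], mode="nearest")
    # logarithmic (cross-entropy) loss between prediction and ground truth
    return F.binary_cross_entropy_with_logits(pred_logits, gt)
```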
- the loss corresponding to each scale can be regarded as the network loss, and the neural network parameters can be optimized separately.
- alternatively, the embodiment of the present disclosure may use the logarithmic loss function to determine, for each scale, a sub-network loss between the face and hand prediction mask maps obtained from the second prediction feature map of that scale and the mask maps corresponding to the real label information, and determine the network loss as a weighted sum of the sub-network losses corresponding to the scales; the network loss determined from this weighted sum is then used to optimize the neural network parameters jointly.
- because the embodiment of the present disclosure obtains the network loss from the prediction result of every second prediction feature map, the trained neural network predicts accurately at every scale, thereby improving the overall detection accuracy of the neural network.
- when the network loss is obtained, the network parameters of the neural network are adjusted based on a comparison of the network loss with a loss threshold. For example, when the network loss is greater than the loss threshold, the parameters of the neural network are adjusted by feedback, such as the parameters of the feature extraction network, the feature pyramid network, and the convolutional layers that produce the mask maps, and the training images are processed again until the obtained network loss is less than the loss threshold. When the network loss is less than the loss threshold, it can be determined that the neural network meets the training requirements, and training can be terminated at this time. Based on the above configuration, optimized training of the neural network can be realized, ensuring the accuracy of network detection.
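- Putting the pieces together, a hedged sketch of this training procedure could look as follows; the optimizer, learning rate, per-scale weights, and loss threshold are illustrative choices, and `log_loss_at_scale` is the helper sketched above.

```python
import torch

def train(network, loader, scale_weights, loss_threshold):
    optimizer = torch.optim.SGD(network.parameters(), lr=0.01)
    while True:
        for images, gt_masks in loader:
            preds = network(images)  # one prediction mask map per scale
            # network loss: weighted sum of the per-scale sub-network losses
            loss = sum(w * log_loss_at_scale(p, gt_masks)
                       for w, p in zip(scale_weights, preds))
            optimizer.zero_grad()
            loss.backward()   # feed back and adjust the network parameters
            optimizer.step()
        # stop once the most recent network loss falls below the threshold
        if loss.item() < loss_threshold:
            return network    # training requirement met; terminate
```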
- in summary, in the embodiments of the present disclosure, the first image corresponding to the area where a person object is located can be determined from the second image, and feature extraction processing is performed on the first image to obtain first feature maps of multiple scales; multi-scale feature fusion processing is then performed to obtain second feature maps of multiple scales, where the second feature maps have more accurate feature information than the first feature maps. The associated face position and hand position of the same person object in the first image can thus be obtained, improving the detection accuracy of faces and hands.
- in addition, the technical solutions of the embodiments of the present disclosure do not need to obtain key points of the ears or wrists and can directly obtain the positions of the associated hands and faces in the image, which is simple, convenient, and highly accurate.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
- the present disclosure also provides a face and hand association detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the face and hand association detection methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which will not be repeated here.
- Fig. 9 shows a block diagram of a face and hand associated detection device according to an embodiment of the present disclosure. As shown in Fig. 9, the face and hand associated detection device includes:
- the obtaining module 10 is configured to obtain a first image, where the first image is an image of a person object;
- the feature extraction module 20 is configured to perform feature extraction on the first image to obtain first feature maps of multiple scales;
- the fusion module 30 is configured to perform feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales, where the scales of the second feature maps correspond one-to-one to the scales of the first feature maps;
- the detection module 40 is configured to detect the associated face position and human hand position of the same person object in the first image based on the obtained second feature maps of the multiple scales.
- the acquisition module includes:
- An acquiring unit configured to acquire the second image, where the second image is an image including at least one person object;
- a target detection unit configured to perform human target detection on the second image to obtain a detection frame of any one of the at least one human object in the first image
- the determining unit is configured to determine an image area corresponding to the detection frame of any person object in the second image as the first image of any person object.
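- The three units above can be illustrated with a short sketch; the `(x1, y1, x2, y2)` box format and the array indexing are assumptions about the detector's output, not requirements of the disclosure.

```python
def first_image_for(second_image, detection_frame):
    # detection_frame: (x1, y1, x2, y2) box of one person object found by
    # human body target detection on the second image
    x1, y1, x2, y2 = detection_frame
    # the image area of the second image inside the frame is the first image
    return second_image[y1:y2, x1:x2]
```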
- the feature extraction module is further configured to adjust the first image into a third image of a preset scale, and to input the third image into a residual network to obtain the first feature maps of the multiple scales;
- the fusion unit is further configured to input the first feature maps of the multiple scales into a feature pyramid network, and to perform the feature fusion processing through the feature pyramid network to obtain the second feature maps of the multiple scales.
- in order of scale from small to large, the multiple first feature maps are denoted as {C_1, ..., C_n}, where n represents the number of first feature maps and n is an integer greater than 1;
- the fusion module is further configured to: perform convolution processing on the first feature map C_n with a first convolution kernel to obtain a second feature map F_n corresponding to C_n, where the scale of the first feature map C_n is the same as the scale of the second feature map F_n; perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to F_n, where the scale of F'_n is the same as the scale of the first feature map C_{n-1}; perform convolution processing on each first feature map C_i other than C_n with a second convolution kernel to obtain a corresponding second intermediate feature map C'_i, where the scale of C'_i is the same as the scale of the first intermediate feature map F'_{i+1}, and i is an integer variable greater than or equal to 1 and less than n; and obtain each second feature map F_i other than F_n from the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1}, where F'_{i+1} is obtained by linear interpolation from the corresponding second feature map F_{i+1};
- the fusion module is further configured to add the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1} to obtain the second feature map F_i.
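- The fusion just described can be sketched in PyTorch as follows, assuming, as in a standard feature pyramid, that C_n is the semantically deepest map, that the linear interpolation upsamples F_{i+1} to the scale of C_i, and that the first and second convolution kernels are 1x1 convolutions with an illustrative output width of 256 channels.

```python
import torch.nn as nn
import torch.nn.functional as F

class PyramidFusion(nn.Module):
    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        # first convolution kernel, applied to C_n
        self.first_conv = nn.Conv2d(in_channels[-1], out_channels, kernel_size=1)
        # second convolution kernels, applied to each C_i with i < n
        self.second_convs = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels[:-1])

    def forward(self, c_maps):                 # [C_1, ..., C_n]
        f = self.first_conv(c_maps[-1])        # F_n, same scale as C_n
        fused = [f]
        for i in range(len(c_maps) - 2, -1, -1):
            # F'_{i+1}: linear interpolation of F_{i+1} to the scale of C_i
            f_up = F.interpolate(f, size=c_maps[i].shape[-2:],
                                 mode="bilinear", align_corners=False)
            c_mid = self.second_convs[i](c_maps[i])  # C'_i, matches F'_{i+1}
            f = c_mid + f_up                         # F_i = C'_i + F'_{i+1}
            fused.insert(0, f)
        return fused                           # [F_1, ..., F_n]
```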
- the detection module is further configured to perform convolution processing on the second feature map with the largest scale among the second feature maps of the multiple scales to obtain a mask map representing the face position and a mask map representing the hand position, respectively, and to determine, based on these mask maps, the location areas where the associated hand and face are located in the first image.
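- As an illustrative sketch, a predicted mask map can be turned into a location area by thresholding it and taking the bounding rectangle of the responding pixels; the threshold value is an assumption, not fixed by the disclosure.

```python
import numpy as np

def mask_to_region(mask, threshold=0.5):
    # keep the pixels where the mask map indicates a face or a hand
    ys, xs = np.where(mask > threshold)
    if xs.size == 0:
        return None  # this mask map detected no face/hand
    # bounding rectangle of the location area, as (x1, y1, x2, y2)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```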
- the scale relationship between the first feature maps of the multiple scales is: L(C_{i-1}) = 2^{k_1} · L(C_i) and W(C_{i-1}) = 2^{k_1} · W(C_i), where C_i represents each first feature map, L(C_i) represents the length of the first feature map C_i, W(C_i) represents the width of the first feature map C_i, k_1 is an integer greater than or equal to 1, i is a variable whose range is [2, n], and n represents the number of first feature maps.
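- For example, under the hypothetical configuration n = 4 and k_1 = 1, adjacent first feature maps differ by a factor of 2^1 = 2 in both length and width, so the four first feature maps could measure 64×64, 32×32, 16×16, and 8×8.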
- the device further includes at least one of a display module and a distribution module, wherein
- the display module is configured to highlight the associated hand and face in the first image;
- the allocation module is configured to allocate the same label to the associated face position and human hand position detected in the first image.
- the device includes a neural network, and the feature extraction module, the fusion module, and the detection module apply the neural network;
- the device also includes a training module for training the neural network, wherein the step of training the neural network includes:
- obtaining a training image, where the training image is an image including a person object, and the training image has labeling information of the truly associated face position and hand position;
- inputting the training image into the neural network, and predicting, through the neural network, the associated face position and hand position of the same person object in the training image;
- determining the network loss based on the predicted associated face position and hand position and the labeling information, and adjusting the network parameters of the neural network according to the network loss until the training requirement is met.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, and a sensor component 814 , And communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800; the sensor component 814 can also detect a position change of the electronic device 800 or of a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to perform the above methods.
- a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- Fig. 11 shows a block diagram of another electronic device according to an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server. Referring to Fig. 11, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to the network, and an input output (I/O) interface 1958 .
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
- the present disclosure may be a system, method and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or an in-groove raised structure on which instructions are stored, and any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus is produced that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of an instruction that contains one or more executable instructions for realizing the specified logical functions. In some alternative implementations, the functions noted in the blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Image Analysis (AREA)
- Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
- Electrophonic Musical Instruments (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Claims (21)
- 1. A human face and human hand association detection method, comprising: acquiring a first image, where the first image is an image of a person object; performing feature extraction on the first image to obtain first feature maps of multiple scales; performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales, where the scales of the second feature maps correspond one-to-one to the scales of the first feature maps; and detecting, based on the obtained second feature maps of the multiple scales, the associated face position and hand position of a same person object in the first image.
- 2. The method according to claim 1, wherein acquiring the first image comprises: acquiring a second image, where the second image is an image including at least one person object; performing human body target detection on the second image to obtain a detection frame of any person object among the at least one person object in the first image; and determining an image area corresponding to the detection frame of the any person object in the second image as the first image of the any person object.
- 3. The method according to claim 1 or 2, wherein performing feature extraction on the first image to obtain first feature maps of multiple scales comprises: adjusting the first image into a third image of a preset scale; and inputting the third image into a residual network to obtain the first feature maps of the multiple scales.
- 4. The method according to any one of claims 1 to 3, wherein performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales comprises: inputting the first feature maps of the multiple scales into a feature pyramid network, and performing the feature fusion processing through the feature pyramid network to obtain the second feature maps of the multiple scales.
- 5. The method according to any one of claims 1 to 4, wherein, in order of scale from small to large, the multiple first feature maps are denoted as {C_1, ..., C_n}, where n represents the number of first feature maps and n is an integer greater than 1; and performing feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales comprises: performing convolution processing on the first feature map C_n with a first convolution kernel to obtain a second feature map F_n corresponding to the first feature map C_n, where the scale of the first feature map C_n is the same as the scale of the second feature map F_n; performing linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to the second feature map F_n, where the scale of the first intermediate feature map F'_n is the same as the scale of the first feature map C_{n-1}; performing convolution processing on each first feature map C_i other than the first feature map C_n with a second convolution kernel to obtain a second intermediate feature map C'_i corresponding to the first feature map C_i, where the scale of the second intermediate feature map C'_i is the same as the scale of the first intermediate feature map F'_{i+1}, and i is an integer variable greater than or equal to 1 and less than n; and obtaining each second feature map F_i other than the second feature map F_n by using the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1}, where the first intermediate feature map F'_{i+1} is obtained by linear interpolation from the corresponding second feature map F_{i+1}.
- 6. The method according to claim 5, wherein obtaining each second feature map F_i other than the second feature map F_n by using the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1} comprises: adding the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1} to obtain the second feature map F_i.
- 7. The method according to any one of claims 1 to 6, wherein detecting, based on the obtained second feature maps of the multiple scales, the associated face position and hand position of a same person object in the first image comprises: performing convolution processing on the second feature map with the largest scale among the second feature maps of the multiple scales to obtain a mask map representing the face position and a mask map representing the hand position, respectively; and determining, based on the mask map of the face position and the mask map of the hand position, the location areas where the associated hand and face are located in the first image.
- 8. The method according to any one of claims 1 to 7, further comprising at least one of the following: highlighting the associated hand and face in the first image; and assigning a same label to the associated face position and hand position detected in the first image.
- 9. The method according to any one of claims 1 to 8, wherein the method is implemented by a neural network, and a step of training the neural network comprises: acquiring a training image, where the training image is an image including a person object, and the training image has labeling information of a truly associated face position and hand position; inputting the training image into the neural network, and predicting, through the neural network, the associated face position and hand position of a same person object in the training image; and determining a network loss based on the predicted associated face position and hand position and the labeling information, and adjusting network parameters of the neural network according to the network loss until a training requirement is met.
- 10. A human face and human hand association detection apparatus, comprising: an acquisition module configured to acquire a first image, where the first image is an image of a person object; a feature extraction module configured to perform feature extraction on the first image to obtain first feature maps of multiple scales; a fusion module configured to perform feature fusion processing on the first feature maps of the multiple scales to obtain second feature maps of multiple scales, where the scales of the second feature maps correspond one-to-one to the scales of the first feature maps; and a detection module configured to detect, based on the obtained second feature maps of the multiple scales, the associated face position and hand position of a same person object in the first image.
- 11. The apparatus according to claim 10, wherein the acquisition module comprises: an acquiring unit configured to acquire a second image, where the second image is an image including at least one person object; a target detection unit configured to perform human body target detection on the second image to obtain a detection frame of any person object among the at least one person object in the first image; and a determining unit configured to determine an image area corresponding to the detection frame of the any person object in the second image as the first image of the any person object.
- 12. The apparatus according to claim 10 or 11, wherein the feature extraction module is further configured to adjust the first image into a third image of a preset scale, and to input the third image into a residual network to obtain the first feature maps of the multiple scales.
- 13. The apparatus according to any one of claims 10 to 12, wherein the fusion unit is further configured to input the first feature maps of the multiple scales into a feature pyramid network, and to perform the feature fusion processing through the feature pyramid network to obtain the second feature maps of the multiple scales.
- 14. The apparatus according to any one of claims 10 to 13, wherein, in order of scale from small to large, the multiple first feature maps are denoted as {C_1, ..., C_n}, where n represents the number of first feature maps and n is an integer greater than 1; and the fusion module is further configured to: perform convolution processing on the first feature map C_n with a first convolution kernel to obtain a second feature map F_n corresponding to the first feature map C_n, where the scale of the first feature map C_n is the same as the scale of the second feature map F_n; perform linear interpolation processing on the second feature map F_n to obtain a first intermediate feature map F'_n corresponding to the second feature map F_n, where the scale of the first intermediate feature map F'_n is the same as the scale of the first feature map C_{n-1}; perform convolution processing on each first feature map C_i other than the first feature map C_n with a second convolution kernel to obtain a second intermediate feature map C'_i corresponding to the first feature map C_i, where the scale of the second intermediate feature map C'_i is the same as the scale of the first intermediate feature map F'_{i+1}, and i is an integer variable greater than or equal to 1 and less than n; and obtain each second feature map F_i other than the second feature map F_n by using the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1}, where the first intermediate feature map F'_{i+1} is obtained by linear interpolation from the corresponding second feature map F_{i+1}.
- 15. The apparatus according to claim 14, wherein the fusion module is further configured to add the second intermediate feature map C'_i and the corresponding first intermediate feature map F'_{i+1} to obtain the second feature map F_i.
- 16. The apparatus according to any one of claims 10 to 15, wherein the detection module is further configured to perform convolution processing on the second feature map with the largest scale among the second feature maps of the multiple scales to obtain a mask map representing the face position and a mask map representing the hand position, respectively, and to determine, based on the mask map of the face position and the mask map of the hand position, the location areas where the associated hand and face are located in the first image.
- 17. The apparatus according to any one of claims 10 to 16, further comprising at least one of a display module and an allocation module, wherein the display module is configured to highlight the associated hand and face in the first image, and the allocation module is configured to assign a same label to the associated face position and hand position detected in the first image.
- 18. The apparatus according to any one of claims 10 to 17, wherein the apparatus comprises a neural network, the feature extraction module, the fusion module, and the detection module apply the neural network, and the apparatus further comprises a training module configured to train the neural network, where a step of training the neural network comprises: acquiring a training image, where the training image is an image including a person object, and the training image has labeling information of a truly associated face position and hand position; inputting the training image into the neural network, and predicting, through the neural network, the associated face position and hand position of a same person object in the training image; and determining a network loss based on the predicted associated face position and hand position and the labeling information, and adjusting network parameters of the neural network according to the network loss until a training requirement is met.
- 19. An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 9.
- 20. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 9.
- 21. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1 to 9.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217021540A KR102632647B1 (ko) | 2019-09-18 | 2019-11-26 | 얼굴과 손을 관련지어 검출하는 방법 및 장치, 전자기기 및 기억매체 |
JP2021538256A JP7238141B2 (ja) | 2019-09-18 | 2019-11-26 | 顔と手を関連付けて検出する方法及び装置、電子機器、記憶媒体及びコンピュータプログラム |
SG11202106831QA SG11202106831QA (en) | 2019-09-18 | 2019-11-26 | Method and apparatus for association detection for human face and human hand, electronic device, and storage medium |
US17/362,037 US20210326587A1 (en) | 2019-09-18 | 2021-06-29 | Human face and hand association detecting method and a device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910882139.6A CN110647834B (zh) | 2019-09-18 | 2019-09-18 | 人脸和人手关联检测方法及装置、电子设备和存储介质 |
CN201910882139.6 | 2019-09-18 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/362,037 Continuation US20210326587A1 (en) | 2019-09-18 | 2021-06-29 | Human face and hand association detecting method and a device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021051650A1 true WO2021051650A1 (zh) | 2021-03-25 |
Family
ID=69010775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/120901 WO2021051650A1 (zh) | 2019-09-18 | 2019-11-26 | 人脸和人手关联检测方法及装置、电子设备和存储介质 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20210326587A1 (zh) |
JP (1) | JP7238141B2 (zh) |
KR (1) | KR102632647B1 (zh) |
CN (1) | CN110647834B (zh) |
SG (1) | SG11202106831QA (zh) |
TW (1) | TWI781359B (zh) |
WO (1) | WO2021051650A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113591967A (zh) * | 2021-07-27 | 2021-11-02 | 南京旭锐软件科技有限公司 | 一种图像处理方法、装置、设备及计算机存储介质 |
CN113723322A (zh) * | 2021-09-02 | 2021-11-30 | 南京理工大学 | 一种基于单阶段无锚点框架的行人检测方法及*** |
CN113936256A (zh) * | 2021-10-15 | 2022-01-14 | 北京百度网讯科技有限公司 | 一种图像目标检测方法、装置、设备以及存储介质 |
CN114005178A (zh) * | 2021-10-29 | 2022-02-01 | 北京百度网讯科技有限公司 | 人物交互检测方法、神经网络及其训练方法、设备和介质 |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229455B (zh) * | 2017-02-23 | 2020-10-16 | 北京市商汤科技开发有限公司 | 物体检测方法、神经网络的训练方法、装置和电子设备 |
WO2021146890A1 (en) * | 2020-01-21 | 2021-07-29 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for object detection in image using detection model |
CN111507408B (zh) * | 2020-04-17 | 2022-11-04 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN111709415B (zh) * | 2020-04-29 | 2023-10-27 | 北京迈格威科技有限公司 | 目标检测方法、装置、计算机设备和存储介质 |
CN111783621B (zh) * | 2020-06-29 | 2024-01-23 | 北京百度网讯科技有限公司 | 人脸表情识别及模型训练的方法、装置、设备及存储介质 |
WO2022144601A1 (en) * | 2020-12-29 | 2022-07-07 | Sensetime International Pte. Ltd. | Method and apparatus for detecting associated objects |
AU2021203821B2 (en) * | 2020-12-31 | 2022-08-18 | Sensetime International Pte. Ltd. | Methods, devices, apparatuses and storage media of detecting correlated objects involved in images |
CN112528977B (zh) * | 2021-02-10 | 2021-07-02 | 北京优幕科技有限责任公司 | 目标检测方法、装置、电子设备和存储介质 |
WO2022195338A1 (en) * | 2021-03-17 | 2022-09-22 | Sensetime International Pte. Ltd. | Methods, apparatuses, devices and storage media for detecting correlated objects involved in image |
WO2022195336A1 (en) * | 2021-03-17 | 2022-09-22 | Sensetime International Pte. Ltd. | Methods, apparatuses, devices and storage medium for predicting correlation between objects |
CN113557546B (zh) * | 2021-03-17 | 2024-04-09 | 商汤国际私人有限公司 | 图像中关联对象的检测方法、装置、设备和存储介质 |
AU2021204583A1 (en) * | 2021-03-17 | 2022-10-06 | Sensetime International Pte. Ltd. | Methods, apparatuses, devices and storage medium for predicting correlation between objects |
CN113031464B (zh) * | 2021-03-22 | 2022-11-22 | 北京市商汤科技开发有限公司 | 设备控制方法、装置、电子设备及存储介质 |
CN112766244B (zh) * | 2021-04-07 | 2021-06-08 | 腾讯科技(深圳)有限公司 | 目标对象检测方法、装置、计算机设备和存储介质 |
WO2022096957A1 (en) * | 2021-06-22 | 2022-05-12 | Sensetime International Pte. Ltd. | Body and hand association method and apparatus, device, and storage medium |
JP2023504319A (ja) | 2021-06-22 | 2023-02-03 | センスタイム インターナショナル ピーティーイー.リミテッド | 人体と人手を関連付ける方法、装置、機器及び記憶媒体 |
CN113591567A (zh) * | 2021-06-28 | 2021-11-02 | 北京百度网讯科技有限公司 | 目标检测方法、目标检测模型的训练方法及其装置 |
CN113642505B (zh) * | 2021-08-25 | 2023-04-18 | 四川大学 | 一种基于特征金字塔的人脸表情识别方法及装置 |
JP7446338B2 (ja) | 2021-09-16 | 2024-03-08 | センスタイム インターナショナル プライベート リミテッド | 顔と手との関連度の検出方法、装置、機器及び記憶媒体 |
WO2023041969A1 (en) * | 2021-09-16 | 2023-03-23 | Sensetime International Pte. Ltd. | Face-hand correlation degree detection method and apparatus, device and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460362A (zh) * | 2018-03-23 | 2018-08-28 | 成都品果科技有限公司 | 一种检测人体部位的***及方法 |
CN109614876A (zh) * | 2018-11-16 | 2019-04-12 | 北京市商汤科技开发有限公司 | 关键点检测方法及装置、电子设备和存储介质 |
EP3493105A1 (en) * | 2017-12-03 | 2019-06-05 | Facebook, Inc. | Optimizations for dynamic object instance detection, segmentation, and structure mapping |
CN109886066A (zh) * | 2018-12-17 | 2019-06-14 | 南京理工大学 | 基于多尺度和多层特征融合的快速目标检测方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012053606A (ja) * | 2010-08-31 | 2012-03-15 | Sony Corp | 情報処理装置および方法、並びにプログラム |
CN109145911A (zh) * | 2017-11-15 | 2019-01-04 | 中国石油大学(华东) | 一种街拍照片目标人物提取方法 |
US10692243B2 (en) * | 2017-12-03 | 2020-06-23 | Facebook, Inc. | Optimizations for dynamic object instance detection, segmentation, and structure mapping |
CN108764164B (zh) * | 2018-05-30 | 2020-12-08 | 华中科技大学 | 一种基于可变形卷积网络的人脸检测方法及*** |
CN109325450A (zh) * | 2018-09-25 | 2019-02-12 | Oppo广东移动通信有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN109508681B (zh) * | 2018-11-20 | 2021-11-30 | 北京京东尚科信息技术有限公司 | 生成人体关键点检测模型的方法和装置 |
CN109711273B (zh) * | 2018-12-04 | 2020-01-17 | 北京字节跳动网络技术有限公司 | 图像关键点提取方法、装置、可读存储介质及电子设备 |
CN109858402B (zh) * | 2019-01-16 | 2021-08-31 | 腾讯科技(深圳)有限公司 | 一种图像检测方法、装置、终端以及存储介质 |
-
2019
- 2019-09-18 CN CN201910882139.6A patent/CN110647834B/zh active Active
- 2019-11-26 SG SG11202106831QA patent/SG11202106831QA/en unknown
- 2019-11-26 KR KR1020217021540A patent/KR102632647B1/ko active IP Right Grant
- 2019-11-26 JP JP2021538256A patent/JP7238141B2/ja active Active
- 2019-11-26 WO PCT/CN2019/120901 patent/WO2021051650A1/zh active Application Filing
- 2019-12-17 TW TW108146192A patent/TWI781359B/zh active
-
2021
- 2021-06-29 US US17/362,037 patent/US20210326587A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3493105A1 (en) * | 2017-12-03 | 2019-06-05 | Facebook, Inc. | Optimizations for dynamic object instance detection, segmentation, and structure mapping |
CN108460362A (zh) * | 2018-03-23 | 2018-08-28 | 成都品果科技有限公司 | 一种检测人体部位的***及方法 |
CN109614876A (zh) * | 2018-11-16 | 2019-04-12 | 北京市商汤科技开发有限公司 | 关键点检测方法及装置、电子设备和存储介质 |
CN109886066A (zh) * | 2018-12-17 | 2019-06-14 | 南京理工大学 | 基于多尺度和多层特征融合的快速目标检测方法 |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113591967A (zh) * | 2021-07-27 | 2021-11-02 | 南京旭锐软件科技有限公司 | 一种图像处理方法、装置、设备及计算机存储介质 |
CN113591967B (zh) * | 2021-07-27 | 2024-06-11 | 南京旭锐软件科技有限公司 | 一种图像处理方法、装置、设备及计算机存储介质 |
CN113723322A (zh) * | 2021-09-02 | 2021-11-30 | 南京理工大学 | 一种基于单阶段无锚点框架的行人检测方法及*** |
CN113936256A (zh) * | 2021-10-15 | 2022-01-14 | 北京百度网讯科技有限公司 | 一种图像目标检测方法、装置、设备以及存储介质 |
CN114005178A (zh) * | 2021-10-29 | 2022-02-01 | 北京百度网讯科技有限公司 | 人物交互检测方法、神经网络及其训练方法、设备和介质 |
CN114005178B (zh) * | 2021-10-29 | 2023-09-01 | 北京百度网讯科技有限公司 | 人物交互检测方法、神经网络及其训练方法、设备和介质 |
Also Published As
Publication number | Publication date |
---|---|
TWI781359B (zh) | 2022-10-21 |
KR20210113612A (ko) | 2021-09-16 |
SG11202106831QA (en) | 2021-07-29 |
TW202113680A (zh) | 2021-04-01 |
JP2022517914A (ja) | 2022-03-11 |
CN110647834A (zh) | 2020-01-03 |
US20210326587A1 (en) | 2021-10-21 |
CN110647834B (zh) | 2021-06-25 |
KR102632647B1 (ko) | 2024-02-01 |
JP7238141B2 (ja) | 2023-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021051650A1 (zh) | 人脸和人手关联检测方法及装置、电子设备和存储介质 | |
TWI724736B (zh) | 圖像處理方法及裝置、電子設備、儲存媒體和電腦程式 | |
CN110287874B (zh) | 目标追踪方法及装置、电子设备和存储介质 | |
CN111310616B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
TWI747325B (zh) | 目標對象匹配方法及目標對象匹配裝置、電子設備和電腦可讀儲存媒介 | |
US11086482B2 (en) | Method and device for displaying history pages in application program and computer-readable medium | |
WO2020134866A1 (zh) | 关键点检测方法及装置、电子设备和存储介质 | |
CN110781813B (zh) | 图像识别方法及装置、电子设备和存储介质 | |
WO2019205605A1 (zh) | 人脸特征点的定位方法及装置 | |
CN109522937B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
JP2022533065A (ja) | 文字認識方法及び装置、電子機器並びに記憶媒体 | |
CN111242303A (zh) | 网络训练方法及装置、图像处理方法及装置 | |
CN113486830A (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN109344703B (zh) | 对象检测方法及装置、电子设备和存储介质 | |
CN109447258B (zh) | 神经网络模型的优化方法及装置、电子设备和存储介质 | |
CN108062168B (zh) | 一种候选词上屏方法、装置和用于候选词上屏的装置 | |
CN112381223A (zh) | 神经网络训练与图像处理方法及装置 | |
CN110019928B (zh) | 视频标题的优化方法及装置 | |
CN117893591B (zh) | 光幕模板识别方法及装置、设备、存储介质和程序产品 | |
KR20150027502A (ko) | 이미지 촬영 방법 및 그 전자 장치 | |
CN114594862A (zh) | 一种推荐方法、装置和电子设备 | |
CN111753596A (zh) | 神经网络的训练方法及装置、电子设备和存储介质 | |
CN112446265A (zh) | 一种输入方法及装置 | |
CN111694769A (zh) | 数据读取方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19945573 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021538256 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28.07.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19945573 Country of ref document: EP Kind code of ref document: A1 |