CN110866508A - Method, device, terminal and storage medium for recognizing form of target object

Method, device, terminal and storage medium for recognizing form of target object

Info

Publication number
CN110866508A
Authority
CN
China
Prior art keywords
feature
image
target
target object
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911141195.0A
Other languages
Chinese (zh)
Other versions
CN110866508B (en)
Inventor
颜波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911141195.0A priority Critical patent/CN110866508B/en
Publication of CN110866508A publication Critical patent/CN110866508A/en
Application granted granted Critical
Publication of CN110866508B publication Critical patent/CN110866508B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Geometry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a method, an apparatus, a terminal and a storage medium for identifying the form of a target object, belonging to the field of image processing. The target object addressed by the application has a corresponding symmetric object, and the symmetry relationship between the target object and the symmetric object may be mirror symmetry or central symmetry. The form of the target object can therefore be determined in a single detection pass, whether the target object is a single object or one of two objects in a symmetric relationship; this improves the efficiency of determining the form of the target object, and the comparison of features reduces the probability of missed detections and false detections of the target object.

Description

Method, device, terminal and storage medium for recognizing form of target object
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method, an apparatus, a terminal, and a storage medium for recognizing a form of a target object.
Background
With the development of image recognition technology, techniques for recognizing whether human eyes are open have advanced steadily.
In some application scenarios, face keypoint localization is performed on an image. After the facial keypoints in the image are located, the eye region is localized from these keypoints, and it is then further determined whether the eyes are open.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a terminal and a storage medium for identifying the form of a target object. The technical solution is as follows:
according to an aspect of the present application, there is provided a method of recognizing a morphology of a target object, the target object having a corresponding symmetric object in a physical world, the target object and the symmetric object being centrosymmetric or mirror-symmetric, the method comprising:
acquiring a target image of the target object;
extracting a first feature of the target image, wherein the first feature is used for indicating the target image;
acquiring a second feature according to the first feature, wherein the second feature is a flip feature corresponding to the first feature;
fusing the first feature and the second feature to obtain a fused feature;
determining a resulting morphology of the target object according to the fusion features, the resulting morphology being a normal morphology or an abnormal morphology.
According to another aspect of the present application, there is provided an apparatus for recognizing a form of a target object, the target object having a corresponding symmetric object in a physical world, the target object and the symmetric object being centrosymmetric or mirror-symmetric, the apparatus comprising:
the image acquisition module is used for acquiring a target image of the target object;
a first feature extraction module, configured to extract a first feature of the target image, where the first feature is used to indicate the target image;
a second feature extraction module, configured to acquire a second feature according to the first feature, wherein the second feature is a flip feature corresponding to the first feature;
the feature fusion module is used for fusing the first feature and the second feature to obtain a fused feature;
and the form determining module is used for determining the result form of the target object according to the fusion characteristics, wherein the result form is a normal form or an abnormal form.
According to another aspect of the present application, there is provided a terminal comprising a processor and a memory, the memory having stored therein at least one instruction, the instruction being loaded and executed by the processor to implement the method of identifying a morphology of a target object as provided in the implementations of the present application.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor, to implement a method of identifying a morphology of a target object as provided in the implementations of the present application.
The beneficial effects brought by the technical scheme provided by the embodiment of the application can include:
According to the method and apparatus provided by the embodiments of the present application, a target image of the target object can be acquired, a first feature is extracted from the target image, a corresponding second feature is acquired according to the first feature, the first feature and the second feature are fused into a fusion feature, and the resulting form of the target object is determined according to the fusion feature, the resulting form being a normal form or an abnormal form. The target object addressed by the application has a corresponding symmetric object, and the symmetry relationship between the target object and the symmetric object may be mirror symmetry or central symmetry. The form of the target object can therefore be determined in a single detection pass, whether the target object is a single object or one of two objects in a symmetric relationship; this improves the efficiency of determining the form of the target object, and the comparison of features reduces the probability of missed detections and false detections of the target object.
Drawings
In order to more clearly describe the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a block diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method for identifying a morphology of a target object provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for identifying a morphology of a target object according to another exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a training process based on the first feature extraction model and the first fully-connected layer provided in FIG. 3;
FIG. 5 is a schematic diagram of a process for identifying a target object according to the embodiment shown in FIG. 3;
fig. 6 is a block diagram illustrating an apparatus for recognizing a morphology of a target object according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly, e.g., as a fixed connection, a detachable connection, or an integral connection; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intermediary. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific situation. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In the field of image processing technology, techniques for identifying the human eye include methods based on geometric features. In schemes based on geometric features, the shape of the human eye varies with the individual and with the expression being made. Therefore, using fixed geometric features tends to cause more false detections and missed detections.
In a possible implementation manner provided by the embodiment of the present application, the terminal can extract a first feature and a second feature from a target object with symmetric characteristics, where the second feature is a flip feature corresponding to the first feature. According to the scheme, the first characteristic and the second characteristic are fused to obtain the fusion characteristic, and whether the target object is in a normal form or an abnormal form is determined according to the fusion characteristic, so that the efficiency of identifying the form of the target object with the symmetrical characteristic is improved.
In another possible implementation provided by the present application, a model based on deep learning can be used for human eye recognition. It should be noted that, because a deep learning model has excellent self-learning and nonlinear fitting capabilities, it can automatically extract high-level features from an image, giving it better robustness and adaptability in human eye recognition applications.
In order to make the solution shown in the embodiments of the present application easy to understand, several terms appearing in the embodiments of the present application will be described below.
ADAM (Adaptive Moment Estimation) model: an optimization algorithm model that iteratively updates the weights of a neural network based on training data and designs an independent adaptive learning rate for each parameter by computing the first-moment and second-moment estimates of the gradient.
MSE (Mean Squared Error): the MSE is used to evaluate the degree of variation of data; the smaller the MSE value, the better the performance of the model constructed based on the convolutional neural network.
For example, the method for recognizing the form of the target object according to the embodiment of the present application may be applied to a terminal having a display screen and a function of recognizing the form of the target object. The terminal may include mobile electronic devices such as a mobile phone, a tablet computer, smart glasses, a smart watch, a digital camera, an MP4 player terminal, an MP5 player terminal, a learning machine, a point-to-read machine, an electronic book, an electronic dictionary, a vehicle-mounted terminal, a Virtual Reality (VR) player terminal, or an Augmented Reality (AR) player terminal.
Referring to fig. 1, fig. 1 is a block diagram of a terminal according to an exemplary embodiment of the present application, and as shown in fig. 1, the terminal includes a processor 120 and a memory 140, where the memory 140 stores at least one instruction, and the instruction is loaded and executed by the processor 120 to implement a method for identifying a morphology of a target object according to various method embodiments of the present application. Optionally, the terminal 100 may further include an image capturing component and a display component, which is not limited in this embodiment.
In the present application, the terminal 100 is an electronic device having the function of recognizing the form of a target object. When the terminal 100 acquires a target image of a target object, the terminal 100 can extract a first feature of the target image, the first feature being used to indicate the target image; acquire a second feature according to the first feature, the second feature being a flip feature corresponding to the first feature; fuse the first feature and the second feature to obtain a fusion feature; and determine a resulting form of the target object according to the fusion feature, the resulting form being a normal form or an abnormal form.
Processor 120 may include one or more processing cores. The processor 120 connects the various parts of the terminal 100 using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 140 and calling data stored in the memory 140. Optionally, the processor 120 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 120 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 120 and may instead be implemented by a separate chip.
The Memory 140 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 140 includes a non-transitory computer-readable medium. The memory 140 may be used to store instructions, programs, code sets, or instruction sets. The memory 140 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like; the storage data area may store data and the like referred to in the following respective method embodiments.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for recognizing a morphology of a target object according to an exemplary embodiment of the present application. The method of recognizing the form of the target object can be applied to the terminal described above. The target object has a corresponding symmetric object in the physical world, and the target object and the symmetric object are centrosymmetric or mirror-symmetric. In fig. 2, the method of recognizing the morphology of the target object includes:
step 210, a target image of a target object is acquired.
In the embodiment of the application, the target image may be an image acquired by the terminal in real time or an image acquired by the terminal from a network. In one possible manner, the target image may be a picture taken by the terminal through the image acquisition component, a viewfinder image displayed in real time in the viewfinder frame, or a frame of a video shot by the terminal.
In another possible manner, the target image may be an image file acquired by the terminal from another device over a network or through wired or wireless data transmission. In this application scenario, the target image may still be a picture or an image frame.
In another classification manner of the target image, the target image may be an image acquired by the image acquisition assembly, or the target image may be an image drawn by the electronic device.
In the embodiment of the application, the terminal acquires the target image of the target object, and the target image can be read from the local or downloaded from the network.
It should be noted that the target object has a corresponding symmetric object in the physical world, and the target object and the symmetric object are centrosymmetric or mirror-symmetric. For human beings, many body surface organs are symmetrical in shape, such as the eyes, hands, arms, legs and feet. Each pair of symmetric organs can be denoted as left and right, respectively. For example, the left eye and the right eye are a pair of mirror-symmetric organs.
In the embodiments of the present application, the human eye is taken as an example for explanation, and the target object may be the left eye or the right eye. On the one hand, when the target object is the left eye, the corresponding symmetric object is the right eye, and the symmetric relationship between the left eye and the right eye is mirror symmetry. On the other hand, when the target object is the right eye, the corresponding symmetric object is the left eye, and the symmetric relationship is likewise mirror symmetry.
In one possible implementation, the target object and the symmetric object belong to the same subject, and the target object and the symmetric object may appear in the same image.
It should be noted that the target object and the symmetric object are not limited to the body surface organ of the human body. The target object and the symmetric object may also be organs in the human body, for example, the left and right lungs, and the left and right kidneys of the human body. The method disclosed by the application can also be applied to a scene of morphological analysis of the condition of the human organ.
Optionally, the target object and the symmetric object may also be other objects with central symmetry, which is not limited in this application.
Step 220, extracting a first feature of the target image, wherein the first feature is used for indicating the target image.
In the embodiment of the application, the terminal can extract a first feature from the target image, where the first feature is used to indicate the target image. It should be noted that, when performing this operation, the terminal may use a model preset in it. The model may be designed through mathematical modeling or obtained through deep-learning training.
When the model used in the embodiment of the present application is a model obtained based on deep learning training, the first feature may be a feature extracted by any one of the feature extraction layers in the model.
Alternatively, the dimension of the first feature may be set in advance by the designer, with different choices made according to the complexity of the target object. In one possible approach, when the image complexity of the target object is low, the dimension of the first feature may be a lower dimension such as 16, 32, or 64. In another possible approach, when the image complexity of the target object is high, the dimension of the first feature may be a higher dimension such as 128, 256, or 512. The dimension of the first feature is not limited in the embodiments of the present application, and its specific value may be determined according to the actual situation.
And step 230, acquiring a second feature according to the first feature, wherein the second feature is a flip feature corresponding to the first feature.
It should be noted that, in the embodiment of the present application, the terminal is capable of acquiring the second feature according to the first feature. Similar to the processing manner in step 220, the terminal can also obtain the second feature from the first feature through a preset data processing model.
In the embodiments of the present application, two ways of obtaining the second feature according to the first feature are provided.
In one possible manner, the terminal may flip the target image according to the symmetric relationship between the target image and the symmetric image to obtain a flipped image, and process the flipped image using the same model as in step 220 to obtain the second feature. However, this approach involves a large amount of computation and is not conducive to fast execution when hardware resources are limited, although its accuracy is high.
In another possible manner, the terminal can acquire the second feature from the first feature through a neural network based on deep learning. This acquisition is fast and efficient, and it reduces both the storage space needed for the various models in the terminal and the computing resources occupied during operation.
It should be noted that the second feature may be used to indicate a flipped image of the target image.
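To make the two acquisition approaches concrete, the following is a minimal sketch in PyTorch (the patent does not name a framework, so the framework, the `feature_extractor` callable and the `flip_fc` layer are illustrative assumptions):

```python
import torch

def flip_feature_by_image(feature_extractor, image):
    # Way 1: horizontally flip the target image and re-run the feature extractor.
    # Accurate but costs a second forward pass (the larger computation noted above).
    flipped = torch.flip(image, dims=[-1])        # mirror along the width axis
    return feature_extractor(flipped)

def flip_feature_by_mapping(flip_fc, first_feature):
    # Way 2: map the first feature directly to the flip feature with a learned
    # fully-connected layer, avoiding a second pass through the backbone.
    return flip_fc(first_feature)
```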
And step 240, fusing the first feature and the second feature to obtain a fused feature.
In the embodiment of the application, the terminal can also fuse the first feature and the second feature to obtain a fusion feature. In one possible approach, if the first feature and the second feature are both represented as vectors, then the dimensions of the first feature and the second feature are the same. In the fusion process, a given dimension of the first feature is fused with the dimension at the same position in the second feature. For example, suppose the dimensions of the first feature and the second feature are both 128. When the first feature and the second feature are fused, the 26th dimension of the first feature is fused with the 26th dimension of the second feature to obtain the 26th dimension of the fusion feature.
In a possible implementation of the embodiment of the present application, the first feature and the second feature may be fused by adding the first feature and the second feature to obtain a sum feature; one half of the sum feature is then taken as the fusion feature. For example, taking the first feature as (1,2,4,1,2) and the second feature as (3,1,2,3,2), the sum feature is (4,3,6,4,4) and the fusion feature is (2,1.5,3,2,2).
Alternatively, the relationship between the first feature (Logit), the second feature (Flip-Logit) and the fusion feature (Final Logit) may be expressed as follows:
Final Logit=1/2(Logit+Flip-Logit)
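As a sketch (assuming both features are tensors of equal dimension), the fusion above is simply an element-wise average:

```python
def fuse_features(logit, flip_logit):
    # Final Logit = 1/2 (Logit + Flip-Logit), i.e. the element-wise mean of the
    # first feature and its flip feature; both must have the same dimension.
    return 0.5 * (logit + flip_logit)
```

For the numeric example above, fusing (1,2,4,1,2) with (3,1,2,3,2) yields (2, 1.5, 3, 2, 2).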
and step 250, determining the result form of the target object according to the fusion characteristics, wherein the result form is a normal form or an abnormal form.
In the embodiment of the application, the terminal can determine the result form of the target object according to the fusion characteristics.
In one possible implementation, the terminal can classify the fusion feature through a binary classifier and determine the resulting form of the target object. The binary classifier can determine, according to the fusion feature, whether the resulting form of the target object is the normal form or the abnormal form. Alternatively, the binary classifier may be a Softmax classifier.
In another possible implementation, the terminal can also obtain the resulting form of the target object through another type of classifier, which is not limited in the embodiments of the present application.
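A minimal sketch of such a binary Softmax classifier over the fusion feature (the feature dimension of 128 and the class indexing are assumptions; the patent only requires a two-class normal/abnormal output):

```python
import torch.nn as nn

class MorphologyClassifier(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 2)       # two classes: normal / abnormal form

    def forward(self, fused_feature):
        logits = self.fc(fused_feature)
        return logits.softmax(dim=-1)             # Softmax probabilities over the two forms
```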
In summary, the method for recognizing the form of a target object provided in this embodiment can acquire a target image of the target object, extract a first feature from the target image, acquire a corresponding second feature according to the first feature, fuse the first feature and the second feature into a fusion feature, and determine the resulting form of the target object according to the fusion feature, the resulting form being a normal form or an abnormal form. The target object addressed by the application has a corresponding symmetric object, and the symmetry relationship between them may be mirror symmetry or central symmetry. The form of the target object can therefore be determined in a single detection pass, whether the target object is a single object or one of two objects in a symmetric relationship; this improves the efficiency of determining the form of the target object, and the comparison of features reduces the probability of missed detections and false detections.
Based on the solution disclosed in the previous embodiment, please refer to the following embodiment.
Referring to fig. 3, fig. 3 is a flowchart of a method for recognizing a morphology of a target object according to another exemplary embodiment of the present application. The method of recognizing the form of the target object can be applied to the terminal described above. In fig. 3, the method of recognizing a morphology of a target object includes:
step 310, a target image of a target object is acquired.
In the embodiment of the present application, the execution process of step 310 is the same as the execution process of step 210, and is not described herein again.
And 320, inputting the target image into the first feature extraction model, and acquiring the first feature output by the first feature extraction model.
In the embodiment of the present application, the first feature extraction model is a feature extraction model that is trained by a first training sample, the first training sample is a first training image labeled with a result form, and the first training image and the target image are images belonging to the same appearance form type.
Referring to fig. 4, fig. 4 is a schematic diagram of the training process based on the first feature extraction model and the first fully-connected layer provided in fig. 3. In fig. 4, the terminal performs face detection and face keypoint determination on a first training image 410. Alternatively, the face detection and face keypoint determination may be implemented by a face detection model and a face keypoint detection model that have already been trained. After determining the left-eye keypoints and the right-eye keypoints in the first training image 410, the terminal crops a left-eye sub-image 411 from the first training image 410 according to the left-eye keypoints, and crops a right-eye sub-image 412 from the first training image 410 according to the right-eye keypoints. It should be noted that both the left-eye sub-image 411 and the right-eye sub-image 412 can be used as training samples for training the first feature extraction model and the first fully-connected layer (i.e., the first training image 410 actually contributes the left-eye sub-image 411 and the right-eye sub-image 412 cropped from it as training samples). In another possible case, if only the left-eye sub-image 411 or only the right-eye sub-image 412 is included in the first training image 410, the terminal uses only the included sub-image as the training sample.
Alternatively, the crop may be a square centered on the eye keypoint, whose half side length equals the distance between the eyebrow keypoint and the eye keypoint.
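A sketch of that cropping rule (assuming a NumPy image of shape H×W×C and pixel keypoint coordinates; clamping to the image border is an added safeguard, not stated in the patent):

```python
import numpy as np

def crop_eye_region(image, eye_xy, brow_xy):
    # Square crop centred on the eye keypoint; half of the side length equals the
    # distance between the eyebrow keypoint and the eye keypoint.
    half_side = int(round(np.linalg.norm(np.asarray(brow_xy, float) - np.asarray(eye_xy, float))))
    x, y = int(eye_xy[0]), int(eye_xy[1])
    h, w = image.shape[:2]
    top, bottom = max(0, y - half_side), min(h, y + half_side)
    left, right = max(0, x - half_side), min(w, x + half_side)
    return image[top:bottom, left:right]
```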
In fig. 4, the terminal inputs either the left-eye sub-image 411 or the right-eye sub-image 412 in one training pass; that is, only one picture at a time is fed to the original feature extraction model. Optionally, the original feature extraction model 420 in the embodiment of the present application may be a MobileNetV2 model, which is high-performing and lightweight and therefore suitable for deployment on mobile devices. The original feature extraction model 420 extracts the training features 430, the training features 430 are evaluated by the loss function 440, and the model is optimized using an optimization model.
In the training process shown in fig. 4, the terminal may train the original feature extraction model using the training samples, and the loss function 440 may be a Softmax loss function, for example as follows:
$$L_{\text{softmax}} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{\mathrm{T}}x_i + b_{y_i}}}{\sum_{j} e^{W_{j}^{\mathrm{T}}x_i + b_{j}}}$$
In the Softmax loss function, x represents the output vector of the MobileNetV2 model, W is the weight vector, b is the bias, and y is the label. After the loss on a training sample is computed according to this function, the terminal can optimize the MobileNetV2 model using a specified optimization model; optionally, the optimization model may be an ADAM model. After the MobileNetV2 model is trained, the first feature extraction model in the embodiment of the present application is obtained.
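A hedged sketch of this training stage (the patent names MobileNetV2, a Softmax loss and the ADAM optimizer; the torchvision backbone and the learning rate are assumptions for illustration):

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2(num_classes=2)                         # original feature extraction model 420
criterion = nn.CrossEntropyLoss()                              # Softmax loss 440
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-3)   # ADAM optimization model

def train_step(eye_images, labels):
    # One optimization step over a batch of left-/right-eye sub-images.
    optimizer.zero_grad()
    logits = backbone(eye_images)                  # training features -> class scores
    loss = criterion(logits, labels)               # Softmax loss against the labeled form
    loss.backward()
    optimizer.step()
    return loss.item()
```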
Step 331, a symmetry relationship between the target object and the corresponding symmetric object is determined.
In the embodiments of the present application, the symmetric relationship includes central symmetry and mirror symmetry. The terminal is able to determine the symmetry relationship between the target object and the corresponding symmetric object. For example, the symmetric relationship between the left eye and the right eye is mirror symmetry, while the white part and the black part of a traditional Chinese pattern (for example, the Taiji pattern) are centrally symmetric.
And 332, overturning the second training image according to the symmetrical relation to obtain a corresponding real overturning image.
In the embodiment of the application, the terminal can flip the second training image according to the determined symmetry relationship to obtain the corresponding real flipped image. It should be noted that, since the real flipped image is obtained by flipping the second training image, flipping the real flipped image again according to the same symmetry relationship recovers the corresponding second training image.
Optionally, in one possible application, the flipping according to mirror symmetry may be a horizontal flip (Horizontal Flip).
Step 333, inputting the real flip image into the first feature extraction model to obtain the real flip feature.
In the embodiment of the application, the terminal inputs the real flipped image into the first feature extraction model and extracts the real flip feature (Flip-Logit) through the model.
Step 334, labeling the real flip feature onto the corresponding second training image to obtain a second training sample.
In the embodiment of the application, the terminal can store the real flip feature in correspondence with the second training image. In one possible manner, the terminal may label the real flip feature onto the second training image to obtain the second training sample.
Step 335, training the original fully-connected layer according to the second training sample and the second loss function to obtain a first fully-connected layer.
In the embodiment of the present application, the terminal is able to train the original fully-connected layer according to the second training sample and the second loss function. It should be noted that the terminal inputs the second training sample into the first feature extraction model to obtain the first feature, the first feature passes through the original fully-connected layer to obtain a predicted flip feature, and the loss function trains the original fully-connected layer using the predicted flip feature and the real flip feature.
It should be noted that the dimension and the number of layers of the original Fully-connected layer (abbreviated as FC) may be set according to actual use requirements, and this is not limited in the embodiment of the present application. In one possible implementation, the dimensions of the original fully-connected layer may be 32, 64, 128, 256, or 512, among others. The number of layers of the original fully-connected layer may be 1,2,3, or 4, and so on.
In an embodiment of the present application, the second loss function may be an MSE loss function. The MSE loss function may be as follows:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}$$

where $\hat{y}_{i}$ is the flip feature predicted by the original fully-connected layer and $y_{i}$ is the real flip feature.
It should be noted that, in this embodiment of the present application, the terminal completes the training of the first fully-connected layer by performing steps 331 to 335, and, combined with the training process for the first feature extraction model described above, implements the training of the deep-learning-based neural network shown in this embodiment. During this optimization, the network weights of the first feature extraction model remain unchanged, and the ADAM optimization model optimizes only the original fully-connected layer, finally yielding the first fully-connected layer.
The training process shown in fig. 4 also includes the original fully-connected layer 450, the predicted flip feature 460 and the second loss function 470.
Optionally, in this embodiment of the present application, the output dimension of the first fully-connected layer is the same as the dimension of the input first feature.
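A sketch of steps 331 to 335 for the mirror-symmetry case (the frozen backbone, the pooled-feature helper and the 1280-dimensional MobileNetV2 feature width are assumptions used for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2

first_model = mobilenet_v2(num_classes=2)               # first feature extraction model (already trained)

def first_feature(x):
    # Pooled penultimate MobileNetV2 activations used as the first feature (assumption).
    return F.adaptive_avg_pool2d(first_model.features(x), 1).flatten(1)

feature_dim = 1280                                       # MobileNetV2 feature width
flip_fc = nn.Linear(feature_dim, feature_dim)            # original fully-connected layer 450
optimizer = torch.optim.Adam(flip_fc.parameters(), lr=1e-3)
mse = nn.MSELoss()                                       # second loss function 470

def train_fc_step(second_training_images):
    with torch.no_grad():                                # backbone weights stay unchanged
        logit = first_feature(second_training_images)                              # first feature
        real_flip = first_feature(torch.flip(second_training_images, dims=[-1]))   # real flip feature
    predicted_flip = flip_fc(logit)                      # predicted flip feature 460
    loss = mse(predicted_flip, real_flip)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```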
In one possible implementation, the terminal can encapsulate the first feature extraction model and the first fully connected layer as a target neural network. In a practical application scenario, the terminal inputs a target image into a target neural network, and a result form of the target image can be obtained.
And step 340, training the target neural network through the third training sample.
In the embodiment of the present application, the third training sample is a third training image labeled with the resulting form, and the third training image and the target image are images belonging to the same appearance form type. The learning rate used to train the target neural network is a target learning rate; the target learning rate is less than the learning rate used to train the first feature extraction model and less than the learning rate used to train the first fully-connected layer.
In the embodiment of the application, the terminal can fine-tune the target neural network through the third training sample, so that, with the first feature extraction model and the first fully-connected layer connected in series, the weights are fine-tuned as a whole and the target neural network identifies the resulting form of the target object more accurately.
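A minimal sketch of this fine-tuning stage; the concrete learning-rate value is an assumption, since the patent only requires the target learning rate to be smaller than the rates used in the two earlier training stages:

```python
import torch

def build_fine_tune_optimizer(target_network, target_lr=1e-5):
    # Fine-tune the whole target neural network (first feature extraction model +
    # first fully-connected layer) end to end with a small target learning rate.
    return torch.optim.Adam(target_network.parameters(), lr=target_lr)
```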
Step 350, inputting the first feature into the first fully-connected layer to obtain a second feature.
The first fully-connected layer is a fully-connected layer that has been trained with a second training sample, the second training sample is a second training image labeled with a real flip feature, the second training image and the target image belong to the same appearance form type, and the output dimension of the first fully-connected layer is equal to the dimension of the first feature.
And step 360, fusing the first feature and the second feature to obtain a fused feature.
In the embodiment of the present application, the execution process of step 360 is the same as the execution process of step 240, and is not described herein again.
Step 370, determining the result morphology of the target object according to the fusion characteristics.
In the embodiment of the present application, the execution process of step 370 is the same as the execution process of step 250, and is not described herein again.
It should be noted that the present application can encapsulate the target neural network and the binary classifier to obtain an overall form recognition model. Referring to fig. 5, fig. 5 is a schematic diagram of the process for identifying a target object according to the embodiment shown in fig. 3. After the target image 510 is input into the form recognition model, a human eye image 511 is obtained first, and the human eye image 511 is input into the first feature extraction model 520 to obtain the first feature 531. The first feature 531 then passes through the first fully-connected layer 521 to obtain the second feature 532. The first feature 531 and the second feature 532 are fused to obtain the fusion feature 533, and the fusion feature 533 passes through the binary classifier 540 to obtain the resulting form 550. When the target object is a human eye, the resulting form 550 indicates an open-eye form or a closed-eye form. The form recognition model includes the first feature extraction model 520, the first fully-connected layer 521, and the binary classifier 540.
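A sketch of the inference path of fig. 5 (the helper names mirror the sketches above and are assumptions; `classifier` is assumed to return Softmax probabilities, with index 1 meaning the open-eye form):

```python
import torch

def recognize_eye_form(eye_image, first_feature, flip_fc, classifier):
    # first feature -> flip feature -> fusion feature -> binary classification.
    with torch.no_grad():
        logit = first_feature(eye_image)            # first feature 531
        flip_logit = flip_fc(logit)                 # second feature 532
        fused = 0.5 * (logit + flip_logit)          # fusion feature 533
        probs = classifier(fused)                   # resulting form probabilities
    return "open-eye" if probs.argmax(dim=-1).item() == 1 else "closed-eye"
```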
In one possible application scenario of the embodiment of the present application, when the target image is a framing (viewfinder) image of the terminal and the resulting form is the open-eye form, the framing image is captured. In this scenario, the method and apparatus can capture an image at the moment the person's eyes are open, improving the mobile terminal's ability to capture open-eye images.
In another possible application scenario of the embodiment of the application, when the target image is an image shot by a terminal and the result form is the closed-eye form, a reminding message is displayed, and the reminding message is used for reminding that the closed-eye condition occurs in the target image. In the scene, the terminal can remind the user of which images contain the eye closing condition in time, so that the user can conveniently perform subsequent processing.
In another possible application scenario of the embodiment of the application, when the target image is an image shot by a terminal and the resulting form is a closed-eye form, the target image is moved to a closed-eye image album. In the scene, the terminal can automatically arrange the images in the closed-eye form into an album, so that the user can rapidly process the images in batches.
In another possible application scenario of the embodiment of the application, when a target image is used as an image of a human face unlocking terminal, a result form is obtained; when the result form is the eye-closing form, refusing to unlock the terminal; and unlocking the terminal when the result form is the eye opening form and the target image is matched with the preset image template. In the scene, the terminal can prevent the user from being illegally unlocked by others in the eye-closing state, and the information safety in the terminal is protected.
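A small sketch of the unlock decision described above (function and variable names are illustrative, not from the patent):

```python
def should_unlock(resulting_form, matches_preset_template):
    # Refuse to unlock in the closed-eye form; unlock only with open eyes
    # and a face that matches the preset image template.
    if resulting_form == "closed-eye":
        return False
    return resulting_form == "open-eye" and matches_preset_template
```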
It should be noted that the first training image, the second training image, and the third training image in the embodiment of the present application may be different training samples, so as to improve the overall recognition capability of the method provided by the present application.
In summary, this embodiment can acquire a target image of the target object, extract a first feature from the target image, acquire a corresponding second feature according to the first feature, fuse the first feature and the second feature into a fusion feature, and determine the resulting form of the target object according to the fusion feature, the resulting form being a normal form or an abnormal form. The target object addressed by the application has a corresponding symmetric object, and the symmetry relationship between them may be mirror symmetry or central symmetry. The form of the target object can therefore be determined in a single detection pass, whether the target object is a single object or one of two objects in a symmetric relationship; this improves the efficiency of determining the form of the target object, and the comparison of features reduces the probability of missed detections and false detections.
Optionally, the embodiment of the application can also effectively select and filter closed-eye pictures by detecting the state of the eyes in real time, thereby improving the user's photographing experience. On the one hand, the human eye state recognition model provided by the invention is based on a MobileNetV2 network, so the human eye state can be judged accurately and in real time on the mobile terminal. On the other hand, existing human eye state recognition models do not consider the difference between the left eye and the right eye. The left eye and the right eye of the same person are roughly mirror-symmetric, apart from certain differences in angle and posture; although some compensation can be made by increasing the amount of data and applying data augmentation, the difference caused by mirror symmetry still has a certain influence on precision. To solve this problem, one model could be trained for the left eye and another for the right eye, with the eye first judged to be left or right during actual use and then fed into the corresponding model for state recognition; however, this requires two models as well as an additional judging module, and cannot well meet the real-time and lightweight requirements of a mobile terminal. According to the method provided by the invention, the original features of the eye picture are extracted with a MobileNetV2 model, the flip features of the eye picture are then obtained through the mapping relationship between the original features and the flip features, and finally the original features and the flip features are fused to obtain the final features. In this way, whether a left-eye region picture or a right-eye region picture is input, the final features contain both the information of the original picture and the information of its mirror-symmetric counterpart, which improves the robustness and adaptability of the model. Since all feature extraction and fusion are completed within the same model, no extra judging module is needed, and the real-time, lightweight and high-performance requirements of a mobile terminal are met.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 6, fig. 6 is a block diagram illustrating a structure of an apparatus for recognizing a morphology of a target object according to an exemplary embodiment of the present application. The means for recognizing the form of the target object may be implemented as all or a part of the terminal by software, hardware, or a combination of both. The target object has a corresponding symmetric object in the physical world, and the target object and the symmetric object are centrosymmetric or mirror-symmetric. The device includes:
an image obtaining module 610, configured to obtain a target image of the target object;
a first feature extraction module 620, configured to extract a first feature of the target image, where the first feature is used to indicate the target image;
a second feature extraction module 630, configured to obtain a second feature according to the first feature, where the second feature is a flip feature corresponding to the first feature;
a feature fusion module 640, configured to fuse the first feature and the second feature to obtain a fused feature;
a morphology determining module 650, configured to determine a resulting morphology of the target object according to the fusion feature, where the resulting morphology is a normal morphology or an abnormal morphology.
In an optional embodiment, the first feature extraction module 620 is configured to input the target image into a first feature extraction model, and obtain the first feature output by the first feature extraction model; the first feature extraction model is a feature extraction model which is trained by a first training sample, the first training sample is a first training image marked with the result form, and the first training image and the target image are images of the same appearance form type.
In an optional embodiment, the second feature extraction module 630 is configured to input the first feature into a first fully-connected layer to obtain the second feature; the first fully-connected layer is a fully-connected layer that has been trained with a second training sample, the second training sample is a second training image labeled with a real flip feature, the second training image and the target image belong to the same appearance form type, and the output dimension of the first fully-connected layer is equal to the dimension of the first feature.
In an optional embodiment, the apparatus further comprises an execution module, configured to determine a symmetry relationship between the target object and the corresponding symmetric object, where the symmetry relationship includes central symmetry and mirror symmetry; flip the second training image according to the symmetry relationship to obtain a corresponding real flipped image; input the real flipped image into the first feature extraction model to obtain the real flip feature; label the real flip feature onto the corresponding second training image to obtain the second training sample; and train an original fully-connected layer according to the second training sample and a second loss function to obtain the first fully-connected layer.
In an alternative embodiment, the first feature extraction model and the first fully connected layer involved in the apparatus are encapsulated as a target neural network.
In an optional embodiment, the apparatus further includes a fine-tuning module, configured to train the target neural network through a third training sample, where the third training sample is a third training image labeled with the resulting form, and the third training image and the target image are images belonging to the same appearance form type; the learning rate used to train the target neural network is a target learning rate, the target learning rate is less than the learning rate used to train the first feature extraction model, and the target learning rate is less than the learning rate used to train the first fully-connected layer.
In an alternative embodiment, when the target object related to the apparatus is a human eye, the relationship between the target object and the symmetric object is mirror symmetry, the abnormal form is the closed-eye form, and the normal form is the open-eye form. The execution module is further configured to capture the framing image when the target image is a framing image of a terminal and the resulting form is the open-eye form; and/or display a reminding message when the target image is an image shot by the terminal and the resulting form is the closed-eye form, where the reminding message is used to remind the user that a closed-eye condition occurs in the target image; and/or move the target image to a closed-eye image album when the target image is an image shot by the terminal and the resulting form is the closed-eye form; and/or obtain the resulting form when the target image is used as an image for face unlocking of the terminal, refuse to unlock the terminal when the resulting form is the closed-eye form, and unlock the terminal when the resulting form is the open-eye form and the target image matches a preset image template.
In an optional embodiment, the feature fusion module 640 is configured to add the first feature and the second feature to obtain a sum feature; determining one-half of the sum feature as the fusion feature.
The present embodiments also provide a computer-readable medium storing at least one instruction, where the at least one instruction is loaded and executed by a processor to implement the method for identifying the form of a target object according to the above embodiments.
It should be noted that: in the above embodiment, when the apparatus for recognizing a form of a target object executes the method for recognizing a form of a target object, only the division of the above functional modules is taken as an example, and in practical applications, the functions may be distributed to different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for identifying a form of a target object and the method embodiment for identifying a form of a target object provided in the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the implementation of the present application and is not intended to limit the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method of recognizing a morphology of a target object, wherein the target object has a corresponding symmetric object in a physical world, and the target object and the symmetric object are centrosymmetric or mirror-symmetric, the method comprising:
acquiring a target image of the target object;
extracting a first feature of the target image, wherein the first feature is used for indicating the target image;
acquiring a second feature according to the first feature, wherein the second feature is a flip feature corresponding to the first feature;
fusing the first feature and the second feature to obtain a fused feature;
determining a resulting morphology of the target object according to the fusion features, the resulting morphology being a normal morphology or an abnormal morphology.
2. The method of claim 1, wherein extracting the first feature of the target image comprises:
inputting the target image into a first feature extraction model, and acquiring the first feature output by the first feature extraction model;
the first feature extraction model is a feature extraction model which is trained by a first training sample, the first training sample is a first training image marked with the result form, and the first training image and the target image are images of the same appearance form type.
3. The method of claim 2, wherein the acquiring a second feature according to the first feature comprises:
inputting the first feature into a first fully-connected layer to obtain the second feature;
the first fully-connected layer is a fully-connected layer trained with a second training sample, the second training sample is a second training image labeled with a real flipped feature, the second training image and the target image belong to the same appearance morphology type, and an output dimension of the first fully-connected layer is equal to a dimension of the first feature.
4. The method of claim 3, wherein before the first feature is input into the first fully-connected layer, the method further comprises:
determining a symmetry relationship between the target object and the corresponding symmetric object, the symmetry relationship including central symmetry and mirror symmetry;
flipping the second training image according to the symmetry relationship to obtain a corresponding real flipped image;
inputting the real flipped image into the first feature extraction model to obtain the real flipped feature;
labeling the corresponding second training image with the real flipped feature to obtain the second training sample;
and training an original fully-connected layer according to the second training sample and a second loss function to obtain the first fully-connected layer.
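The second-stage training described above can be pictured with the following sketch; the frozen extractor, the mean-squared-error loss standing in for the unspecified "second loss function", and all hyper-parameters are assumptions for illustration only:

import torch
import torch.nn.functional as F

extractor = torch.nn.Sequential(                      # already-trained extractor, frozen here
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(16, 128)).eval()
flip_layer = torch.nn.Linear(128, 128)                # original fully-connected layer being trained
optimizer = torch.optim.SGD(flip_layer.parameters(), lr=0.01)

second_training_images = torch.randn(8, 3, 64, 64)
# Mirror symmetry: flip along the width axis; central symmetry would flip height and width.
real_flipped_images = torch.flip(second_training_images, dims=[3])

with torch.no_grad():
    first_features = extractor(second_training_images)
    real_flipped_features = extractor(real_flipped_images)   # labels for the fully-connected layer

loss = F.mse_loss(flip_layer(first_features), real_flipped_features)  # stand-in second loss function
optimizer.zero_grad()
loss.backward()
optimizer.step()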
5. The method of claim 4, wherein the first feature extraction model and the first fully-connected layer are encapsulated as a target neural network.
6. The method of claim 5, further comprising:
training the target neural network with a third training sample, wherein the third training sample is a third training image labeled with the result morphology, and the third training image and the target image belong to the same appearance morphology type;
wherein a learning rate used for training the target neural network is a target learning rate, the target learning rate being less than a learning rate used for training the first feature extraction model and less than a learning rate used for training the first fully-connected layer.
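The learning-rate relationship of claim 6 can be made executable as below; the numeric values and the two-layer stand-in network are assumptions chosen only to illustrate the constraint, not values disclosed in this application:

import torch

extractor_lr = 1e-2     # learning rate assumed for training the first feature extraction model
flip_layer_lr = 1e-2    # learning rate assumed for training the first fully-connected layer
target_lr = 1e-3        # target learning rate for fine-tuning the packaged target neural network

target_network = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.Linear(128, 2))
optimizer = torch.optim.SGD(target_network.parameters(), lr=target_lr)
assert target_lr < extractor_lr and target_lr < flip_layer_lr  # target rate is the smallest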
7. The method according to any one of claims 1 to 6, wherein, when the target object is a human eye, the symmetry relationship between the target object and the symmetric object is mirror symmetry, the abnormal morphology is a closed-eye morphology, and the normal morphology is an open-eye morphology, the method further comprising:
when the target image is a viewfinder image of a terminal and the result morphology is the open-eye morphology, capturing the viewfinder image;
and/or,
when the target image is an image captured by a terminal and the result morphology is the closed-eye morphology, displaying a reminder message, wherein the reminder message is used to indicate that a closed eye exists in the target image;
and/or,
when the target image is an image captured by a terminal and the result morphology is the closed-eye morphology, moving the target image to a closed-eye image album;
and/or,
when the target image is an image used for face unlocking of a terminal, acquiring the result morphology;
refusing to unlock the terminal when the result morphology is the closed-eye morphology;
and unlocking the terminal when the result morphology is the open-eye morphology and the target image matches a preset image template.
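The branching behaviour enumerated above can be summarised in a small decision helper; the scenario names, action strings, and the handle_result function are hypothetical placeholders that merely restate the claim's logic and do not correspond to any real terminal API:

from enum import Enum

class Morphology(Enum):
    OPEN_EYE = "open"      # normal morphology
    CLOSED_EYE = "closed"  # abnormal morphology

def handle_result(scenario: str, morphology: Morphology, template_matched: bool = False) -> str:
    # Map each claimed scenario to an illustrative action.
    if scenario == "viewfinder":
        return "capture the viewfinder image" if morphology is Morphology.OPEN_EYE else "wait"
    if scenario == "captured_photo":
        if morphology is Morphology.CLOSED_EYE:
            return "display closed-eye reminder and move image to closed-eye album"
        return "keep image in the main album"
    if scenario == "face_unlock":
        if morphology is Morphology.CLOSED_EYE:
            return "refuse to unlock"
        return "unlock" if template_matched else "refuse to unlock"
    return "no action"

print(handle_result("face_unlock", Morphology.OPEN_EYE, template_matched=True))  # -> unlock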
8. The method according to any one of claims 1 to 6, wherein the fusing of the first feature and the second feature to obtain the fused feature comprises:
adding the first feature and the second feature to obtain a sum feature;
determining one half of the sum feature as the fused feature.
9. An apparatus for recognizing a morphology of a target object, wherein the target object has a corresponding symmetric object in a physical world, and the target object and the symmetric object are centrosymmetric or mirror-symmetric, the apparatus comprising:
an image acquisition module, configured to acquire a target image of the target object;
a first feature extraction module, configured to extract a first feature of the target image, wherein the first feature is used to indicate the target image;
a second feature extraction module, configured to acquire a second feature according to the first feature, wherein the second feature is a flipped feature corresponding to the first feature;
a feature fusion module, configured to fuse the first feature and the second feature to obtain a fused feature;
and a morphology determining module, configured to determine a result morphology of the target object according to the fused feature, wherein the result morphology is a normal morphology or an abnormal morphology.
10. A terminal, characterized in that the terminal comprises a processor, a memory connected to the processor, and program instructions stored on the memory, which, when executed by the processor, implement the method of recognizing a morphology of a target object according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon program instructions which, when executed by a processor, implement the method of recognizing a morphology of a target object according to any one of claims 1 to 8.
CN201911141195.0A 2019-11-20 2019-11-20 Method, device, terminal and storage medium for identifying form of target object Active CN110866508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911141195.0A CN110866508B (en) 2019-11-20 2019-11-20 Method, device, terminal and storage medium for identifying form of target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911141195.0A CN110866508B (en) 2019-11-20 2019-11-20 Method, device, terminal and storage medium for identifying form of target object

Publications (2)

Publication Number Publication Date
CN110866508A (en) 2020-03-06
CN110866508B CN110866508B (en) 2023-06-27

Family

ID=69655610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911141195.0A Active CN110866508B (en) 2019-11-20 2019-11-20 Method, device, terminal and storage medium for identifying form of target object

Country Status (1)

Country Link
CN (1) CN110866508B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114792355A (en) * 2022-06-24 2022-07-26 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014431A1 (en) * 2005-06-10 2007-01-18 Hammoud Riad I System and method for detecting an eye
CN101059836A (en) * 2007-06-01 2007-10-24 华南理工大学 Human eye positioning and human eye state recognition method
US20140285768A1 (en) * 2011-10-24 2014-09-25 Iriss Medical Technologies Limited System and Method for Identifying Eye Conditions
CN104346621A (en) * 2013-07-30 2015-02-11 展讯通信(天津)有限公司 Method and device for creating eye template as well as method and device for detecting eye state
CN104463081A (en) * 2013-09-16 2015-03-25 展讯通信(天津)有限公司 Detection method of human eye state
CN105095879A (en) * 2015-08-19 2015-11-25 华南理工大学 Eye state identification method based on feature fusion
CN108615014A (en) * 2018-04-27 2018-10-02 京东方科技集团股份有限公司 A kind of detection method of eye state, device, equipment and medium
CN108921117A (en) * 2018-07-11 2018-11-30 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109711309A (en) * 2018-12-20 2019-05-03 北京邮电大学 A kind of method whether automatic identification portrait picture closes one's eyes
CN110163160A (en) * 2019-05-24 2019-08-23 北京三快在线科技有限公司 Face identification method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN110866508B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN109697416B (en) Video data processing method and related device
CN112733802B (en) Image occlusion detection method and device, electronic equipment and storage medium
WO2023098128A1 (en) Living body detection method and apparatus, and training method and apparatus for living body detection system
CN102332095B (en) Face motion tracking method, face motion tracking system and method for enhancing reality
CN110223322B (en) Image recognition method and device, computer equipment and storage medium
CN110569808A (en) Living body detection method and device and computer equipment
CN111461089A (en) Face detection method, and training method and device of face detection model
CN108805047A (en) A kind of biopsy method, device, electronic equipment and computer-readable medium
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
KR20220042301A (en) Image detection method and related devices, devices, storage media, computer programs
CN113449623B (en) Light living body detection method based on deep learning
WO2024001095A1 (en) Facial expression recognition method, terminal device and storage medium
CN109670517A (en) Object detection method, device, electronic equipment and target detection model
CN111488774A (en) Image processing method and device for image processing
CN111259757B (en) Living body identification method, device and equipment based on image
CN111325107A (en) Detection model training method and device, electronic equipment and readable storage medium
CN116048244A (en) Gaze point estimation method and related equipment
CN112560584A (en) Face detection method and device, storage medium and terminal
CN110866508B (en) Method, device, terminal and storage medium for identifying form of target object
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium
CN112766065A (en) Mobile terminal examinee identity authentication method, device, terminal and storage medium
CN112381749A (en) Image processing method, image processing device and electronic equipment
CN115937938A (en) Training method of face identity recognition model, face identity recognition method and device
CN115984978A (en) Face living body detection method and device and computer readable storage medium
CN115082992A (en) Face living body detection method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant