CN115937766A - Identification method, identification device, electronic equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN115937766A
Authority
CN
China
Prior art keywords
image frame, image, face, file, determining
Legal status
Pending
Application number
CN202211512753.1A
Other languages
Chinese (zh)
Inventor
祖春山
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Application filed by BOE Technology Group Co Ltd
Priority to CN202211512753.1A
Publication of CN115937766A

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an identification method, an identification device, an electronic device, and a computer-readable storage medium, and relate to the field of computer technology. The method comprises the following steps: performing image recognition on a file to be analyzed, determining a target image frame in the file to be analyzed, and performing feature recognition on a first object based on the target image frame. Feature recognition therefore only needs to be performed on the first object based on the target image frame, rather than on the other image frames in the file to be analyzed. On the one hand, this reduces the amount of computation for feature recognition and improves its efficiency; on the other hand, because the target image frame is an image frame whose image quality parameter meets a preset requirement, performing feature recognition based on such an image frame also improves its accuracy.

Description

Identification method, identification device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an identification method, an identification device, an electronic device, and a computer-readable storage medium.
Background
Image recognition is an important area of artificial intelligence. It refers to the technique of using a computer to process, analyze, and understand images in order to recognize targets and objects in various patterns, and is a practical application of deep learning algorithms. Examples include face recognition and commodity recognition: face recognition is mainly applied to security inspection, identity verification, mobile payment, and the like, while commodity recognition is mainly applied to the commodity circulation process, for example in the unmanned retail field of unmanned shelves and intelligent retail cabinets.
However, the inventor has found that, in the related art, image recognition involves a large amount of computation and is inefficient.
Disclosure of Invention
The present application aims to address at least one of the above technical drawbacks, in particular the large amount of computation and the low efficiency of image recognition.
According to an aspect of the present application, there is provided an identification method, including:
acquiring a file to be analyzed, wherein the file to be analyzed is a multimedia file;
carrying out image recognition on the file to be analyzed, and determining a target image frame in the file to be analyzed; the target image frame is an image frame containing a first object, and the image quality parameter of the image frame meets the preset requirement;
feature recognition is performed on the first object based on the target image frame.
Optionally, the performing feature recognition on the first object based on the target image frame includes:
acquiring an object feature of the first object based on the target image frame;
and determining whether the first object is a target object in a preset characteristic range according to the object characteristics.
Optionally, the performing image recognition on the file to be analyzed to determine a target image frame in the file to be analyzed includes:
performing face detection processing on the image frames, and determining a first image frame containing a face image;
performing face tracking processing on the first image frame, and determining a second image frame containing the same face image in the first image frame;
determining a target image frame in the second image frame.
Optionally, the performing the face tracking processing on the first image frame and determining a second image frame containing the same face image in the first image frame includes:
comparing the position characteristics and the morphological characteristics of the face image in the first image frame, and determining that the first image frame containing the face image is the second image frame under the condition that the position characteristics and the morphological characteristics meet corresponding preset conditions;
wherein the morphological features at least comprise color features and shape features.
Optionally, comparing the position features of the face image in the first image frame includes:
predicting the first image frame through a Kalman filtering algorithm to obtain predicted face coordinates in the first image frame;
and comparing the predicted face coordinates with first face coordinates in a subsequent image frame, and determining that the predicted face coordinates and the first face coordinates meet a matching relation.
Optionally, comparing the morphological features of the face image in the first image frame includes:
and comparing morphological features of the face image in the first image frame, and determining whether the similarity of the morphological features is greater than a first preset threshold.
Optionally, the determining a target image frame in the second image frame includes:
inputting the second image frames into a quality detection model, and determining an image quality score of each second image frame;
and determining the second image frame with the largest image quality score as the target image frame.
Optionally, after determining the target image frame in the second image frames, the method further includes:
carrying out super-resolution denoising processing on the target image frame to obtain a denoised image frame with noise removed and resolution greater than a second preset threshold;
the acquiring of the object feature of the first object includes:
and acquiring the object characteristics of a first object in the de-noised image frame.
Optionally, in the case where an occlusion region is included in the face image,
the determining whether the first object is a target object within a preset feature range according to the object features includes:
comparing the object features with image features in a simulation database to determine whether the first object is the target object;
wherein the simulation database comprises face images with simulated occlusion regions.
Optionally, the face detection processing and the super-resolution denoising processing are respectively realized by a face detection model and a super-resolution model;
the method further comprises the following steps:
and performing quantization processing on an initial face detection model, an initial super-resolution model, and an initial quality detection model to obtain the face detection model, the super-resolution model, and the quality detection model whose accuracy is greater than a third preset threshold and whose prediction speed is greater than a fourth preset threshold.
According to another aspect of the present application, there is provided an identification apparatus, the apparatus including:
the file acquisition module is used for acquiring a file to be analyzed, wherein the file to be analyzed is a multimedia file;
the determining module is used for carrying out image recognition on the file to be analyzed and determining a target image frame in the file to be analyzed; the target image frame is an image frame that contains a first object and whose image quality parameter meets the preset requirement;
and the identification module is used for carrying out feature identification on the first object based on the target image frame.
According to another aspect of the present application, there is provided an electronic device, including: a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the steps of the identification method of any one of the first aspect of the present application.
For example, in a third aspect of the present application, there is provided a computing device, including: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the identification method as shown in the first aspect of the application.
According to yet another aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the identification method of any one of the first aspects of the present application.
For example, in a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the identification method shown in the first aspect of the present application.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the various alternative implementations of the first aspect described above.
The beneficial effects brought by the technical solution provided by the present application are as follows:
In the embodiment of the present application, image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based on the target image frame. Feature recognition therefore only needs to be performed on the first object based on the target image frame, rather than on the other image frames in the file to be analyzed. On the one hand, this reduces the amount of computation for feature recognition and improves its efficiency; on the other hand, because the target image frame is an image frame whose image quality parameter meets the preset requirement, performing feature recognition based on such an image frame also improves its accuracy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic diagram of a system architecture of an identification method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an identification method according to an embodiment of the present disclosure;
fig. 3 is a second schematic flowchart of an identification method according to an embodiment of the present application;
fig. 4 is a schematic model diagram of an identification method according to an embodiment of the present application;
fig. 5 is a second schematic model diagram of an identification method according to an embodiment of the present application;
fig. 6 is a third schematic flowchart of an identification method according to an embodiment of the present application;
fig. 7 is one of application scenarios of an identification method according to an embodiment of the present application;
fig. 8 is a second schematic view of an application scenario of an identification method according to an embodiment of the present application;
fig. 9 is a third schematic diagram of a recognition method according to an embodiment of the present application;
FIG. 10 is a fourth exemplary model of an identification method according to an embodiment of the present disclosure;
fig. 11 is a fourth schematic flowchart of an identification method according to an embodiment of the present application;
fig. 12 is a fifth flowchart illustrating an identification method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an identification apparatus according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of an identified electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term; for example, "A and/or B" can be implemented as "A", as "B", or as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
At least part of the content of the identification method provided by the embodiments of the present application relates to fields such as machine learning within artificial intelligence, and also to various fields of cloud technology, such as cloud computing and cloud services in cloud technology, as well as related data computing and processing in the field of big data.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specifically studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
To further explain the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method steps shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive effort. For steps that logically have no necessary causal relationship, the execution order of those steps is not limited to that provided by the embodiments of the present application.
Reference is first made to fig. 1, which is a system architecture diagram of an identification method provided in an embodiment of the present application. The system may include a server 101 and a cluster of terminals, wherein the server 101 may be considered as a background server for the identification process.
The terminal cluster may include: terminal 102, terminal 103, terminal 104, and so on, where a client supporting the identification process may be installed in each terminal. There may be communication connections between the terminals, for example, a communication connection between terminal 102 and terminal 103, and a communication connection between terminal 103 and terminal 104.
Meanwhile, the server 101 may provide services for the terminal cluster through its communication connection function, and any terminal in the terminal cluster may have a communication connection with the server 101; for example, a communication connection exists between the terminal 102 and the server 101, and between the terminal 103 and the server 101. The connection manner is not limited: the connection may be direct or indirect through wired communication, direct or indirect through wireless communication, or established in other manners.
The communicatively coupled network may be a wide area network or a local area network, or a combination thereof. The application is not limited thereto.
In the embodiment of the present application, image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based on the target image frame. Feature recognition therefore only needs to be performed on the first object based on the target image frame, rather than on the other image frames in the file to be analyzed. On the one hand, this reduces the amount of computation for feature recognition and improves its efficiency; on the other hand, because the target image frame is an image frame whose image quality parameter meets the preset requirement, performing feature recognition based on such an image frame also improves its accuracy.
The method provided by the embodiment of the present application can be executed by a computer device, which includes but is not limited to a terminal (also including the user terminal described above) or a server (also including the server 101 described above). The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
The embodiments of the present application are not intended to be limiting. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
The embodiment of the present application provides a possible implementation manner, and the scheme may be executed by any electronic device; optionally, the electronic device may be a server device with identification capability, or a device or chip integrated in such devices. As shown in fig. 2, which is a schematic flowchart of an identification method provided in an embodiment of the present application, the method includes the following steps:
step S201: and acquiring a file to be analyzed, wherein the file to be analyzed is a multimedia file.
Optionally, the identification method of the embodiment of the present application may be applied to an application scenario in which image identification is performed on a file to be analyzed.
Specifically, the file to be analyzed is a multimedia file, for example, the file to be analyzed may include a video file, an image file, and the like.
The video file may be a video captured in real time. For example, in some road monitoring scenarios, the video file may be a video of vehicles and pedestrians captured in real time by a camera installed on a road; in some product processing scenarios, it may be a processing video of each step of a product, captured in real time by a camera installed on the production line; and in real-life area monitoring scenarios, it may be a video of personnel activity captured by a camera installed in the monitored area, and so on. The video file may also be a video stored in a database, such as a movie, a TV drama video, a game video, a daily-life video, or a real-time captured video that has been stored in the database, and so on.
The image file may be at least one picture, for example, the image file may be a plurality of pictures taken continuously, or may also be a plurality of pictures taken discontinuously, and so on.
As described above, the file to be analyzed may be a multimedia file in multiple scenes, which is not limited in this embodiment of the present application.
In some optional embodiments, the file to be analyzed may be obtained by uploading through a terminal device, for example, the terminal device may be an image acquisition device that acquires a video image in real time; in addition, the file to be analyzed can be obtained from a preset database, and the like.
Step S202: carrying out image recognition on the file to be analyzed, and determining a target image frame in the file to be analyzed; the target image frame is an image frame containing a first object, and the image quality parameter of the image frame meets the preset requirement.
In some alternative embodiments, the first object may be any object included in the image frame. Optionally, the first object may be an item object, a person object, an animal object, a scene element object, or the like, in the image frame.
For example, an object such as a vehicle or at least one component of a vehicle on a road, a product or at least one part of a product on a production line, a table and a chair in a living scene, a water cup, etc. A person object such as a target person or the whole body, face, head, hands, feet, etc. of the target person. Animal subjects such as kittens, puppies, elephants, lions, and the like. Scene element objects such as sky, clouds, desert, bushes, rain, snow, wind, etc., reflect objects of a scene.
It should be noted that the first object may be a real object or a virtual object, such as a virtual object (e.g., a three-dimensional model), a virtual character (e.g., an animation character), a virtual animal, and a virtual scene element (e.g., a virtual fog in a game scene, a virtual cloud layer, a virtual jungle, etc.).
Among the image frames containing the first object, the target image frame is an image frame whose image quality parameter meets a preset requirement. The image quality parameter may be a parameter characterizing the image quality of the target image frame; in some scenarios, it may also be a parameter characterizing the image quality of the first object.
In some alternative embodiments, the image quality parameter may be an image quality score. Where the image quality score is used to characterize the image quality of the target image frame, the score may be determined based on the resolution, sharpness, and the like of the target image frame; where the image quality score is used to characterize the image quality of the first object, the score may be determined based on the completeness, shooting angle, sharpness, and the like of the first object in the image frame.
In an actual implementation scenario, the image quality score may be obtained by detecting the image frame through an image quality detection model. When the target image frame is determined, the image frame with the image quality score larger than the preset threshold value can be determined as the target image frame; the image quality scores may also be sorted, and a preset number of image frames are selected as target image frames according to the sorting of the image quality scores, for example, five image frames with the highest image quality scores are selected as the target image frames, or the image frame with the highest image quality scores is selected as the target image frame, and the like, which is not limited in the embodiment of the present application.
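As an illustration only, the two selection strategies described above may be sketched as follows in Python; the pairing of frames with precomputed quality scores and the parameter names top_k and min_score are assumptions made for the example, not details of the embodiment.

```python
def select_target_frames(frames, scores, top_k=1, min_score=None):
    """Select target image frames from (frame, quality score) pairs.

    Strategy 1: keep every frame whose score exceeds a preset threshold.
    Strategy 2: rank by score and keep the top_k highest-scoring frames.
    """
    if min_score is not None:
        return [f for f, s in zip(frames, scores) if s > min_score]
    ranked = sorted(zip(frames, scores), key=lambda pair: pair[1], reverse=True)
    return [f for f, _ in ranked[:top_k]]
```

For example, select_target_frames(frames, scores, top_k=5) would keep the five image frames with the highest image quality scores, while top_k=1 keeps only the frame with the highest score.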
When determining a target image frame in a file to be analyzed, the determination may be performed by performing image recognition on the file to be analyzed.
In some alternative embodiments, the specific process of image recognition may include image detection processing, image tracking processing, image quality detection, and the like.
Specifically, the image detection processing is used to detect the first image frames containing the first object. As an example, taking the first object as a person's face (i.e., a human face), the image detection processing detects the first image frames containing a face image.
The image tracking process is used to track the first image frame to determine a second image frame in the first image frame that contains the same first object. Still in conjunction with the above example, the image tracking process is used to determine a second image frame in the first image frame that contains the same facial image (i.e., contains the face of the same person).
The image quality detection is used for performing quality detection on the image frame and determining the image quality score of the image frame. For example, the target image frame in the second image frame may be determined by an image quality score obtained by image quality detection.
Step S203: feature recognition is performed on the first object based on the target image frame.
After the target image frame is determined, feature recognition may be performed on the first object based on the target image frame.
In this embodiment, the related attribute of the first object may be determined by performing feature recognition on the first object. For example, taking the first object as a vehicle as an example, the brand, model, and the like of the vehicle can be determined through feature recognition; taking the first object as a product on a production line as an example, the type of the product, the production process, whether the product has defects and the like can be determined through characteristic identification; taking the first object as the face of the person as an example, the identity of the person may be determined by feature recognition, and so on.
In some optional embodiments, when performing feature recognition on the first object, object features of the first object may be extracted based on the target image frame, and then, based on the object features, the first object is recognized or resolved, that is, relevant attributes of the first object are determined.
In other alternative embodiments, the target image frame may be further processed to obtain a higher resolution image frame, and then the first object may be subjected to feature recognition based on the processed image frame. For example, the image processing may include image super-resolution processing, image de-noising processing, and the like.
In the embodiment of the present application, image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based on the target image frame. Feature recognition therefore only needs to be performed on the first object based on the target image frame, rather than on the other image frames in the file to be analyzed. On the one hand, this reduces the amount of computation for feature recognition and improves its efficiency; on the other hand, because the target image frame is an image frame whose image quality parameter meets the preset requirement, performing feature recognition based on such an image frame also improves its accuracy.
In one embodiment of the present application, the performing feature recognition on the first object based on the target image frame includes:
acquiring an object feature of the first object based on the target image frame;
and determining whether the first object is a target object in a preset characteristic range according to the object characteristics.
Specifically, an object feature of the first object may be acquired based on the target image frame. For example, the object feature of the first object may be acquired directly from the target image frame; or performing image super-resolution processing, image denoising processing and the like on the target image frame to obtain a denoised image frame with noise removed and resolution greater than a preset threshold, and then acquiring the object characteristics of the first object from the denoised image frame.
After the object features are obtained, whether the first object is a target object in a preset feature range can be determined according to the object features. As an example, taking the first object as the face of a person as an example, the object features may be facial features such as facial skin color features, facial shape features, eye features, nose features, and the like. Whether the first object is a target object within a preset feature range can be determined according to the facial features. Optionally, the object within the preset feature range may be a facial image already stored in a preset database, for example, in an actual scene, the facial image already stored in the preset database may be a white list object; by comparing the above-mentioned facial features with the white list object, it can be determined whether the first object is a white list object.
Further, in some scenarios, the alert information may be issued when the first object is not a whitelist object.
For example, as shown in fig. 3, taking the first object as a person's face (a human face), in a scenario of monitoring whether a stranger appears in a video image: after a video file is acquired, processing such as face detection and face tracking may be performed on the video file, a target image frame containing the same face image is determined, feature recognition is performed on the face image in that target image frame, and the features of the face image are compared with the features of the white-list objects. When the features of the face image are not those of a white-list object, alarm information may be issued; conversely, if an alarm state is currently active, information canceling the alarm may be issued accordingly.
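As an illustration of the white-list comparison, a minimal cosine-similarity check might look as follows; the feature vectors, the 0.6 decision threshold, and the raise_alarm/clear_alarm hooks are assumptions for the example.

```python
import numpy as np

def is_whitelisted(face_feature, whitelist_features, threshold=0.6):
    """Return True when the face feature matches any white-list feature
    by cosine similarity (the threshold value is an assumed example)."""
    query = face_feature / np.linalg.norm(face_feature)
    for ref in whitelist_features:
        if float(query @ (ref / np.linalg.norm(ref))) >= threshold:
            return True
    return False

# Hypothetical flow corresponding to fig. 3:
#   if not is_whitelisted(features, whitelist): raise_alarm()
#   elif alarm_active: clear_alarm()
```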
In an embodiment of the present application, the performing image recognition on the file to be analyzed and determining a target image frame in the file to be analyzed includes:
performing face detection processing on the image frames, and determining a first image frame containing a face image;
performing face tracking processing on the first image frame, and determining a second image frame containing the same face image in the first image frame;
determining a target image frame in the second image frame.
In this embodiment of the application, taking the file to be analyzed as a video file and the first object as a person's face, when performing image recognition on the video file, face detection processing may first be performed on the image frames in the video file to detect the first image frames containing a face image.
In some alternative embodiments, the face detection process may detect the face of a person in the image frame, and may specifically detect faces from various angles, such as front, side, top, and the like.
Alternatively, the face detection processing may be implemented by a face detection model; for example, as shown in fig. 4, the face detection model may adopt MobileNet-RetinaFace, a lightweight detection model based on a Convolutional Neural Network (CNN). As shown in fig. 5, the core of MobileNet-RetinaFace is a feature pyramid network (FPN) structure together with a lightweight MobileNet backbone, and the core of MobileNet is the depthwise separable convolution.
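For reference, a depthwise separable convolution block, the building unit named above, might be sketched in Python (PyTorch) as follows; the channel arrangement, batch normalization, and ReLU placement follow common MobileNet practice and are assumptions, not details taken from this application.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 per-channel (depthwise) convolution followed by a 1x1
    (pointwise) convolution, the core block of MobileNet."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))      # spatial filtering per channel
        return self.act(self.bn2(self.pointwise(x)))   # channel mixing
```

Compared with a standard 3x3 convolution, this factorization reduces the multiply-accumulate count by roughly a factor of 8 to 9 at typical channel widths, which is what makes the backbone lightweight.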
After the first image frame is determined by the face detection processing, the first image frame may be subjected to face tracking processing, and a second image frame including the same face image in the first image frame may be determined.
In some optional embodiments, the performing the face tracking processing on the first image frame and determining a second image frame containing the same face image in the first image frame includes:
comparing the position characteristics and the morphological characteristics of the face image in the first image frame, and determining that the first image frame containing the face image is the second image frame under the condition that the position characteristics and the morphological characteristics meet corresponding preset conditions;
wherein the morphological characteristics at least comprise color characteristics and shape characteristics.
Specifically, the second image frame may be determined by comparing the position feature, the morphological feature, and the like of the face image in the first image frame.
The position feature of the face image may be coordinates of the face in the first image frame. In some optional embodiments, comparing the position features of the face image in the first image frame comprises:
predicting the first image frame through a Kalman filtering algorithm to obtain predicted face coordinates in the first image frame;
and comparing the predicted face coordinates with first face coordinates in a subsequent image frame, and determining that the predicted face coordinates and the first face coordinates meet a matching relation.
In an actual implementation scenario, the face tracking processing is performed iteratively; as an example, only two of the first image frames are used for explanation:
For the first image frame, assuming that a face image is detected, the position coordinates of the face image are used as the initial position coordinates; the position coordinates of the face image in the next image frame (i.e., the subsequent image frame in this embodiment) are then predicted through the Kalman filtering algorithm and a preset movement speed. These predicted position coordinates are the predicted face coordinates in this embodiment.
For the second image frame, assuming that a face image is detected again, the position coordinates of the face image in this second image frame are the first face coordinates; the predicted face coordinates are compared with the first face coordinates through the Hungarian algorithm to determine whether they satisfy the matching relation. Matching here may be understood as the predicted face coordinates being identical to the first face coordinates, or the difference between the coordinate values of the two lying within a preset difference range.
When the predicted face coordinates and the first face coordinates satisfy the preset coordinate relationship, the Kalman filtering parameters are updated using the first face coordinates of the face image in the second image frame.
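A minimal sketch of this predict-then-match step is given below, pairing a constant-velocity Kalman filter (via the filterpy package) with Hungarian matching (via scipy); the state layout, the gating distance, and all names are assumptions made for the example rather than details of the embodiment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from filterpy.kalman import KalmanFilter

def make_face_tracker(x0, y0):
    """Constant-velocity Kalman filter over a face-center coordinate."""
    kf = KalmanFilter(dim_x=4, dim_z=2)           # state (x, y, vx, vy)
    kf.F = np.array([[1., 0., 1., 0.],            # position advances by velocity
                     [0., 1., 0., 1.],
                     [0., 0., 1., 0.],
                     [0., 0., 0., 1.]])
    kf.H = np.array([[1., 0., 0., 0.],            # only (x, y) is measured
                     [0., 1., 0., 0.]])
    kf.x = np.array([x0, y0, 0., 0.])
    return kf

def associate(trackers, detections, gate=60.0):
    """Hungarian matching of predicted face coordinates against the first
    face coordinates detected in the subsequent image frame."""
    preds = []
    for kf in trackers:
        kf.predict()                              # predicted face coordinates
        preds.append(kf.x[:2])
    cost = np.linalg.norm(np.asarray(preds)[:, None, :]
                          - np.asarray(detections)[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    for r, c in matches:                          # update Kalman parameters
        trackers[r].update(np.asarray(detections[c], dtype=float))
    return matches
```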
In some optional embodiments, the comparing the morphological feature of the face image in the first image frame comprises:
and comparing morphological features of the face image in the first image frame, and determining whether the similarity of the morphological features is greater than a first preset threshold.
Specifically, the morphological feature may include a color feature, a shape feature, and the like. Alternatively, the similarity of the morphological features may be determined by a cosine similarity algorithm.
That is to say, in the embodiment of the present application, when the predicted face coordinates and the first face coordinates satisfy the matching relationship, and the similarity of the morphological feature is greater than a first preset threshold, it is determined that the first image frame including the face image (i.e. the same face image) is the second image frame.
In addition, in some optional embodiments, Histogram of Oriented Gradient (HOG) features, Intersection over Union (IOU) features, and the like of the face image may also be compared to further determine the second image frames containing the same face image.
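As an illustration, the cosine similarity of morphological features and the IOU cue mentioned above could be computed as follows; the histogram-style feature input and the (x1, y1, x2, y2) box format are assumptions made for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity of two morphological feature vectors,
    e.g. color histograms of the tracked face regions."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) face boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0
```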
In one embodiment of the present application, the determining a target image frame in the second image frame includes:
inputting the second image frames into a quality detection model, and determining an image quality score of each second image frame;
and determining the second image frame with the largest image quality score as the target image frame.
Optionally, in this embodiment of the present application, the image quality score is used to characterize the image quality of the first object, and may be determined based on the completeness, shooting angle, sharpness, and the like of the first object in the image frame. The image quality score can be obtained by detecting the image frame through a quality detection model.
When determining the target image frame, the image quality scores may be sorted, and the image frame with the largest image quality score may be selected as the target image frame.
Further, in some embodiments, the image quality score may also be compared to a quality score threshold to determine the target image frame.
As shown in connection with fig. 6 and 7, the image quality score Q[c] of image frame c may be compared with a quality score threshold H (H may be a larger threshold), and when Q[c] is greater than H, image frame c may be determined as the target image frame.
As another example, in conjunction with fig. 8, when Q[c] is less than H, Q[c] may be compared with a quality score threshold L (L may be a smaller threshold), and when Q[c] is less than L, the comparison may be ended and image frame c eliminated. When Q[c] is greater than L, Q[c] may be compared with Q[c-1] (i.e., the image quality score of frame c-1); when Q[c] > Q[c-1], Q[c-1] > Q[c-1-s], Q[c-1-s] > Q[c-1-2s], and Q[c-1-2s] > Q[c-1-3s] (where s is the step size, e.g., s may be 1), Q[c] may be determined to be the largest image quality score, i.e., image frame c may be determined as the target image frame.
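The threshold logic of figs. 6 to 8 can be summarized in a short sketch; treating frames with insufficient history as non-targets is an assumption made for the example.

```python
def is_target_frame(Q, c, H, L, s=1):
    """Decide whether image frame c is the target frame.

    Q: per-frame image quality scores; H/L: high/low score thresholds;
    s: comparison step size (e.g. s = 1), as described above.
    """
    if Q[c] > H:
        return True                   # confidently high-quality frame
    if Q[c] < L:
        return False                  # quality too low: eliminate frame c
    # Between L and H: require a strictly increasing run of scores.
    idxs = [c - 1 - 3 * s, c - 1 - 2 * s, c - 1 - s, c - 1, c]
    if idxs[0] < 0:
        return False                  # not enough history yet (assumption)
    return all(Q[a] < Q[b] for a, b in zip(idxs, idxs[1:]))
```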
In one embodiment of the present application, after determining the target image frame in the second image frame, the method further comprises:
carrying out super-resolution denoising processing on the target image frame to obtain a denoised image frame with noise removed and resolution greater than a second preset threshold;
the acquiring of the object feature of the first object includes:
and acquiring the object characteristics of a first object in the de-noised image frame.
Optionally, in this embodiment of the application, after the target image frame is obtained, the target image frame may be subjected to super-resolution denoising processing. The super-resolution denoising processing can be realized through a super-resolution model.
As shown in fig. 9 and 10, the super-resolution model is a multi-scale network that takes images of several scales as input; similar to a reconstruction pyramid, the images are optimized step by step until a reconstructed clear image is obtained. The whole network is divided into three sub-networks of different sizes, and the core of each sub-network is an encoder-decoder architecture: sub-network 3 performs denoising processing on the input image B3 to obtain I3; I3 is upscaled and then fused with B2 as the input of sub-network 2, which outputs I2; I2 is upscaled and then fused with B1 as the input of sub-network 1, which finally outputs the super-resolution denoised image I1.
As an example, the original noisy face image may be subjected to scaling pre-processing to obtain face images of different sizes, for example B1 (256, 256), B2 (128, 128), and B3 (64, 64); these face images B1, B2, and B3 are then input into the super-resolution model, and a denoised image I1 of size (256, 256) can be obtained.
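A structural sketch of such a three-scale pyramid is shown below in PyTorch; it is illustrative only, and the layer widths, activations, and fusion by channel concatenation are assumptions rather than the actual network of this application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNet(nn.Module):
    """Minimal encoder-decoder used at one pyramid scale."""
    def __init__(self, in_ch, feat=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

class PyramidSR(nn.Module):
    """Three sub-networks; each output is upscaled and fused (here by
    concatenation) with the next larger input, as described above."""
    def __init__(self):
        super().__init__()
        self.sub3 = SubNet(in_ch=3)   # denoises B3 at the coarsest scale
        self.sub2 = SubNet(in_ch=6)   # takes upscaled I3 fused with B2
        self.sub1 = SubNet(in_ch=6)   # takes upscaled I2 fused with B1

    def forward(self, b1, b2, b3):    # e.g. (256,256), (128,128), (64,64)
        i3 = self.sub3(b3)
        i3 = F.interpolate(i3, scale_factor=2, mode="bilinear", align_corners=False)
        i2 = self.sub2(torch.cat([i3, b2], dim=1))
        i2 = F.interpolate(i2, scale_factor=2, mode="bilinear", align_corners=False)
        return self.sub1(torch.cat([i2, b1], dim=1))   # I1, denoised output
```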
As shown in connection with fig. 11, in some embodiments, the denoised image frame may be determined from the second image frame by the following process:
First, face alignment processing is performed on the second image frames to obtain face images at a frontal angle. The aligned image frames are then input into the quality detection model to obtain their image quality scores, and the target image frame with the largest image quality score is screened out by comparing these scores. Whether the target image frame satisfies a super-resolution condition is then determined; for example, the condition may be that the image size is larger than (64, 64). When the condition is satisfied, super-resolution denoising processing is performed on the target image frame to obtain a denoised image frame. Furthermore, the face features in the denoised image frame can be extracted and compared with the face features in the white list, and alarm information is issued when the face image does not belong to the white list. In an actual scenario, the white-list face images may be the face images of residents and security personnel in a community.
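A sketch stringing the fig. 11 flow together is given below, with the models passed in as plain callables; align_face and raise_alarm are hypothetical helpers, and is_whitelisted is the comparison sketched earlier.

```python
import numpy as np

def analyze_tracked_faces(second_frames, quality_model, sr_model,
                          extract_features, whitelist_features,
                          min_sr_size=(64, 64)):
    """Illustrative end-to-end flow for one tracked face."""
    aligned = [align_face(f) for f in second_frames]   # frontal-angle faces
    scores = [quality_model(f) for f in aligned]
    target = aligned[int(np.argmax(scores))]           # largest quality score
    h, w = target.shape[:2]
    if h > min_sr_size[0] and w > min_sr_size[1]:      # super-resolution condition
        target = sr_model(target)                      # super-resolution denoising
    features = extract_features(target)
    if not is_whitelisted(features, whitelist_features):
        raise_alarm()                                  # hypothetical alert hook
```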
In one embodiment of the present application, in the case where an occlusion region is included in the face image,
the determining whether the first object is a target object within a preset feature range according to the object features includes:
comparing the object features with image features in a simulation database to determine whether the first object is the target object;
wherein the simulation database comprises face images with simulated occlusion regions.
Specifically, in an actual scenario, the occlusion region may be a region blocked by a mask, an eye patch, glasses, or other means, and the like.
In the case where an occlusion region is included in the face image, the object features may be compared with image features in the simulation database.
When constructing a face image having a simulated occlusion region, taking the wearing of a mask as an example, mask wearing may be simulated for the face images in the database. Specifically, facial key points such as the left ear point, the right ear point, the nose bridge midpoint, and the chin vertex may be obtained, and the size of the mask is then calculated from these key points: the mask width is calculated from the distance between the left ear point and the right ear point, and the mask height from the distance between the nose bridge midpoint and the chin vertex. The midpoint between the nose bridge midpoint and the chin vertex is taken as the center point of the mask, and the angle of the mask is fine-tuned according to the angle of the line connecting the nose bridge midpoint and the chin vertex, and so on.
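A minimal sketch of this mask geometry is given below, assuming each key point is an (x, y) pixel coordinate; the function name and return convention are illustrative only.

```python
import math

def mask_geometry(left_ear, right_ear, nose_bridge, chin):
    """Estimate the simulated mask rectangle from four facial key points."""
    width = math.dist(left_ear, right_ear)        # mask width from ear distance
    height = math.dist(nose_bridge, chin)         # height from bridge-to-chin
    center = ((nose_bridge[0] + chin[0]) / 2,     # midpoint of bridge and chin
              (nose_bridge[1] + chin[1]) / 2)
    # Fine-tune the angle from the bridge-chin connecting line (degrees).
    angle = math.degrees(math.atan2(chin[1] - nose_bridge[1],
                                    chin[0] - nose_bridge[0])) - 90.0
    return width, height, center, angle
```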
In some alternative embodiments, if it cannot be determined whether the face image has an occlusion region, the object features may be compared with the image features in the simulation database and with those in an ordinary database of face images without occlusion regions, respectively.
In one embodiment of the present application, the face detection processing and the super-resolution denoising processing are implemented by a face detection model and a super-resolution model, respectively;
the method further comprises the following steps:
and performing quantization processing on the initial face detection model, the initial super-resolution model, and the initial quality detection model to obtain the face detection model, the super-resolution model, and the quality detection model whose accuracy is greater than a third preset threshold and whose prediction speed is greater than a fourth preset threshold.
The embodiment of the present application further comprises a step of performing quantization processing on the face detection model, the super-resolution model, and the quality detection model. Alternatively, the quantization processing may employ automatic mixed-precision quantization.
As shown in fig. 12, the third preset threshold may be set to 99%, and the fourth preset threshold may be set to 25%. The specific quantization steps are, for example: first, an initial quantization model is obtained using INT8 quantization precision; a data set is automatically read, and the accuracy and prediction speed of the initial quantization model are evaluated; if both meet the index requirements, the result is returned; otherwise, layer-by-layer error analysis is performed, and the layer with the largest error is selected for a precision promotion, for example from INT8 to INT16, from INT16 to FP16, or from FP16 to FP32. After a new quantization model is obtained, the data set is automatically read again, the accuracy and prediction speed of the new quantization model are evaluated, and quantization is repeated until both reach the preset standards, i.e., the accuracy is greater than 99% and the prediction speed is greater than 25%.
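The evaluate-and-promote loop of fig. 12 might be sketched as follows; quantize_model, evaluate, and per_layer_error are hypothetical helpers, layer_names is an assumed model attribute, and the targets simply mirror the thresholds stated above.

```python
PRECISION_LADDER = ["INT8", "INT16", "FP16", "FP32"]

def auto_mixed_precision_quantize(model, dataset,
                                  acc_target=0.99, speed_target=25.0):
    """Quantize all layers to INT8, then promote the worst layer until
    both accuracy and prediction speed meet the preset standards."""
    precision = {name: "INT8" for name in model.layer_names}
    while True:
        qmodel = quantize_model(model, precision)        # hypothetical helper
        accuracy, speed = evaluate(qmodel, dataset)      # hypothetical helper
        if accuracy > acc_target and speed > speed_target:
            return qmodel                                # both indices met
        # Layer-by-layer error analysis: promote the worst layer one step.
        worst = max(model.layer_names,
                    key=lambda n: per_layer_error(qmodel, model, n))
        step = PRECISION_LADDER.index(precision[worst])
        if step + 1 == len(PRECISION_LADDER):
            return qmodel                                # nothing left to promote
        precision[worst] = PRECISION_LADDER[step + 1]
```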
In the embodiment of the present application, image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based on the target image frame. Feature recognition therefore only needs to be performed on the first object based on the target image frame, rather than on the other image frames in the file to be analyzed. On the one hand, this reduces the amount of computation for feature recognition and improves its efficiency; on the other hand, because the target image frame is an image frame whose image quality parameter meets the preset requirement, performing feature recognition based on such an image frame also improves its accuracy.
An embodiment of the present application provides an identification apparatus, as shown in fig. 13, where the identification apparatus 130 may include: a file acquisition module 1301, a determination module 1302, and an identification module 1303, wherein,
the file obtaining module 1301 is configured to obtain a file to be analyzed, where the file to be analyzed is a multimedia file;
a determining module 1302, configured to perform image recognition on the file to be analyzed and determine a target image frame in the file to be analyzed; the target image frame is an image frame that contains a first object and whose image quality parameter meets the preset requirement;
and an identifying module 1303, configured to perform feature identification on the first object based on the target image frame.
In an embodiment of the application, the identification module is specifically configured to obtain an object feature of the first object based on the target image frame;
and determining whether the first object is a target object in a preset characteristic range according to the object characteristics.
In an embodiment of the application, the determining module is specifically configured to perform a face detection process on the image frames, determine a first image frame containing a face image;
performing face tracking processing on the first image frame, and determining a second image frame containing the same face image in the first image frame;
determining a target image frame in the second image frame.
In an embodiment of the application, the determining module is specifically configured to compare a position feature and a morphological feature of a facial image in the first image frame, and determine that the first image frame including the facial image is the second image frame when the position feature and the morphological feature meet corresponding preset conditions;
wherein the morphological characteristics at least comprise color characteristics and shape characteristics.
In an embodiment of the application, the determining module is specifically configured to perform prediction processing on the first image frame through a kalman filtering algorithm, so as to obtain predicted face coordinates in the first image frame;
and comparing the predicted face coordinates with first face coordinates in a subsequent image frame, and determining that the predicted face coordinates and the first face coordinates meet a matching relation.
In an embodiment of the application, the determining module is specifically configured to compare morphological features of the face image in the first image frame, and determine whether a similarity of the morphological features is greater than a first preset threshold.
In an embodiment of the application, the determining module is specifically configured to input the second image frames into a quality detection model, and determine an image quality score for each of the second image frames;
and determining the second image frame with the largest image quality score as the target image frame.
In one embodiment of the present application, the apparatus further comprises a super-resolution module configured to, after the target image frame is determined in the second image frames,
carrying out super-resolution denoising processing on the target image frame to obtain a denoised image frame with noise removed and resolution greater than a second preset threshold;
the identification module is specifically configured to acquire an object feature of a first object in the denoised image frame.
In one embodiment of the present application, in the case where an occlusion region is included in the face image,
the identification module is specifically configured to compare the object features with image features in a simulation database, and determine whether the first object is the target object;
wherein the simulation database comprises face images with simulated occlusion regions.
In one embodiment of the present application, the face detection processing and the super-resolution denoising processing are implemented by a face detection model and a super-resolution model, respectively;
the device further comprises a quantification module, wherein the quantification module is used for quantifying the initial face detection model, the initial hyper-resolution model and the initial quality detection model to obtain the face detection model, the hyper-resolution model and the quality detection model, the accuracy of which is greater than a third preset threshold value, and the prediction speed of which is greater than a fourth preset threshold value.
The apparatus of the embodiment of the present application can execute the method provided by the embodiments of the present application, and its implementation principle is similar; the actions executed by the modules in the apparatus correspond to the steps in the method of the embodiments of the present application, and for a detailed functional description of the modules of the apparatus, reference may be made to the description of the corresponding method shown above, which is not repeated here.
In the embodiment of the present application, image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based on the target image frame. Feature recognition therefore only needs to be performed on the first object based on the target image frame, rather than on the other image frames in the file to be analyzed. On the one hand, this reduces the amount of computation for feature recognition and improves its efficiency; on the other hand, because the target image frame is an image frame whose image quality parameter meets the preset requirement, performing feature recognition based on such an image frame also improves its accuracy.
An embodiment of the present application provides an electronic device, including a memory and a processor, the memory storing at least one program which, when executed by the processor, implements the following: image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based on the target image frame. Feature recognition therefore needs to be performed only for the target image frame rather than for the other image frames in the file to be analyzed, which on the one hand reduces the amount of computation for feature recognition and improves its efficiency, and on the other hand improves its accuracy, because the target image frame is an image frame whose image quality parameters meet the preset requirements.
In an alternative embodiment, an electronic device is provided. As shown in fig. 14, the electronic device 4000 includes a processor 4001 and a memory 4003, which are coupled, for example via a bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, which may be used for data interaction between this electronic device and other electronic devices, such as transmitting and/or receiving data. It should be noted that, in practical applications, the number of transceivers 4004 is not limited to one, and the structure of the electronic device 4000 does not limit the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is represented by only one thick line in fig. 14, but this does not mean that there is only one bus or only one type of bus.
The memory 4003 may be a ROM (Read-Only Memory) or another type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or another type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 4003 is used for storing application program code (a computer program) for executing the present solution, and execution is controlled by the processor 4001; the processor 4001 is configured to execute the application code stored in the memory 4003 to implement what is shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile phones, notebook computers, multimedia players, desktop computers, and the like.
The present application provides a computer-readable storage medium, on which a computer program is stored, which, when running on a computer, enables the computer to execute the corresponding content in the foregoing method embodiments.
In the embodiment of the application, image recognition is performed on the file to be analyzed, a target image frame in the file to be analyzed is determined, and feature recognition is performed on the first object based only on the target image frame; as explained above, this reduces the amount of computation and improves the efficiency of feature recognition, and, because the target image frame is an image frame whose image quality parameters meet the preset requirements, it also improves the accuracy of the feature recognition.
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and claims of this application and in the preceding drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than illustrated or otherwise described herein.
It should be understood that, although each operation step is indicated by an arrow in the flowchart of the embodiment of the present application, the implementation order of the steps is not limited to the order indicated by the arrow. In some implementation scenarios of the embodiments of the present application, the implementation steps in the flowcharts may be performed in other sequences as desired, unless explicitly stated otherwise herein. In addition, some or all of the steps in each flowchart may include multiple sub-steps or multiple stages based on an actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each of these sub-steps or stages may be performed at different times, respectively. In a scenario where execution times are different, an execution sequence of the sub-steps or the phases may be flexibly configured according to requirements, which is not limited in the embodiment of the present application.
The foregoing describes only optional implementations of some of the implementation scenarios of this application. It should be noted that other similar implementation means based on the technical idea of this application, devised by those skilled in the art without departing from that technical idea, also fall within the protection scope of the embodiments of this application.

Claims (13)

1. An identification method, comprising:
acquiring a file to be analyzed, wherein the file to be analyzed is a multimedia file;
performing image recognition on the file to be analyzed, and determining a target image frame in the file to be analyzed, wherein the target image frame is an image frame which contains a first object and whose image quality parameter meets a preset requirement;
and performing feature recognition on the first object based on the target image frame.
2. The identification method according to claim 1, wherein said performing feature recognition on the first object based on the target image frame comprises:
acquiring object features of the first object based on the target image frame;
and determining, according to the object features, whether the first object is a target object within a preset feature range.
3. The identification method according to claim 1, wherein said performing image recognition on the file to be analyzed and determining a target image frame in the file to be analyzed comprises:
performing face detection processing on image frames of the file to be analyzed, and determining a first image frame containing a face image;
performing face tracking processing on the first image frame, and determining a second image frame containing the same face image in the first image frame;
and determining a target image frame in the second image frames.
4. The identification method according to claim 3, wherein said performing face tracking processing on the first image frame and determining a second image frame containing the same face image in the first image frame comprises:
comparing position features and morphological features of the face image in the first image frame, and determining, in a case where the position features and the morphological features meet corresponding preset conditions, that the first image frame containing the face image is the second image frame;
wherein the morphological features comprise at least color features and shape features.
5. The identification method according to claim 4, wherein said comparing the position features of the face image in the first image frame comprises:
predicting the first image frame through a Kalman filtering algorithm to obtain predicted face coordinates in the first image frame;
and comparing the predicted face coordinates with first face coordinates in a subsequent image frame, and determining that the predicted face coordinates and the first face coordinates meet a matching relation (see the sketch after the claims).
6. The identification method according to claim 4, wherein said comparing the morphological features of the face image in the first image frame comprises:
and comparing morphological features of the face image in the first image frame, and determining whether the similarity of the morphological features is greater than a first preset threshold.
7. The identification method according to claim 3, wherein said determining a target image frame in the second image frames comprises:
inputting the second image frames into a quality detection model, and determining an image quality score of each second image frame;
and determining the second image frame with the largest image quality score as the target image frame.
8. The identification method according to claim 3, wherein after said determining a target image frame in said second image frames, said method further comprises:
carrying out super-resolution denoising processing on the target image frame to obtain a denoised image frame with noise removed and resolution greater than a second preset threshold;
said acquiring the object features of the first object comprises:
acquiring the object features of the first object in the denoised image frame.
9. The identification method according to claim 2, wherein, in a case where an occlusion region is included in the face image,
the determining whether the first object is a target object within a preset feature range according to the object features includes:
comparing the object features with image features in a simulation database to determine whether the first object is the target object;
wherein the simulation database comprises face images with simulated occlusion regions.
10. The identification method according to claim 8, wherein the face detection processing and the super-resolution denoising processing are implemented by a face detection model and a super-resolution model, respectively;
the method further comprises the following steps:
and performing quantization processing on the initial face detection model, the initial super-resolution model and the initial quality detection model to obtain the face detection model, the super-resolution model and the quality detection model whose accuracy is greater than a third preset threshold and whose prediction speed is greater than a fourth preset threshold.
11. An identification device, comprising:
the file acquisition module is used for acquiring a file to be analyzed, wherein the file to be analyzed is a multimedia file;
the determining module is used for performing image recognition on the file to be analyzed and determining a target image frame in the file to be analyzed, wherein the target image frame is an image frame which contains a first object and whose image quality parameter meets a preset requirement;
and the identification module is used for performing feature recognition on the first object based on the target image frame.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the identification method according to any of claims 1-10.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the identification method according to any one of claims 1 to 10.
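
By way of illustration of the Kalman prediction recited in claim 5: the sketch below tracks the centre of a face bounding box with OpenCV's KalmanFilter under a constant-velocity assumption. The state layout, noise settings and matching gate are assumptions of the sketch, not the patented implementation:

```python
import cv2
import numpy as np

# State: [x, y, dx, dy]; measurement: [x, y], the face-box centre.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def matches(first_face_xy: np.ndarray, gate: float = 30.0) -> bool:
    """Predict the face coordinates for the next frame and test whether the
    first face coordinates in that frame meet the matching relation
    (here: Euclidean distance below an illustrative pixel gate)."""
    predicted_xy = kf.predict()[:2].ravel()
    if np.linalg.norm(predicted_xy - first_face_xy) < gate:
        kf.correct(first_face_xy.reshape(2, 1).astype(np.float32))
        return True
    return False
```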
CN202211512753.1A 2022-11-28 2022-11-28 Identification method, identification device, electronic equipment and computer-readable storage medium Pending CN115937766A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211512753.1A CN115937766A (en) 2022-11-28 2022-11-28 Identification method, identification device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115937766A 2023-04-07



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination