CN113554740A - Mask customizing method and device based on artificial intelligence and storage medium - Google Patents

Mask customizing method and device based on artificial intelligence and storage medium

Info

Publication number
CN113554740A
CN113554740A (application CN202010328855.2A)
Authority
CN
China
Prior art keywords
user
face
data
dimensional
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010328855.2A
Other languages
Chinese (zh)
Inventor
莫若理 (Mo Ruoli)
赵磊 (Zhao Lei)
诸晓明 (Zhu Xiaoming)
陈建军 (Chen Jianjun)
Current Assignee
Chison Medical Technologies Co ltd
Wuxi Chison Medical Technologies Co Ltd
Original Assignee
Chison Medical Technologies Co ltd
Priority date
Filing date
Publication date
Application filed by Chison Medical Technologies Co ltd filed Critical Chison Medical Technologies Co ltd
Priority to CN202010328855.2A
Publication of CN113554740A
Legal status: Pending

Classifications

    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06Q50/04 Manufacturing (ICT specially adapted for business processes of specific sectors)
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T5/70 Denoising; smoothing
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30201 Face (subject of image: human being; person)
    • G06T2219/2012 Colour editing, changing, or manipulating; use of colour codes
    • Y02P90/30 Computing systems specially adapted for manufacturing (enabling technologies with a potential contribution to greenhouse gas emissions mitigation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Architecture (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Mask customizing method and device based on artificial intelligence, and storage medium. The invention relates to the technical field of artificial intelligence and discloses a mask customizing method comprising the following steps: acquiring face data of a user, the face data comprising at least face depth information and face color information; obtaining three-dimensional face point cloud data of the user by applying a face feature acquisition model to the face data; obtaining three-dimensional face mesh data of the user by applying a face three-dimensional imaging model to at least the point cloud data; and obtaining mask parameters corresponding to the user based on at least the three-dimensional face mesh data, the mask parameters comprising at least shape parameters and size parameters.

Description

Mask customizing method and device based on artificial intelligence and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a mask customizing method and device based on artificial intelligence and a storage medium.
Background
Existing masks have fixed shapes and sizes that do not suit every user, which makes them difficult to wear. For example, a common off-the-shelf mask may dig into the user's face after long wear, or may not fit the face closely, causing discomfort.
Disclosure of Invention
The invention provides a mask customizing method, device, and storage medium based on artificial intelligence, which generate mask parameters from the facial features of a specific user and thereby solve the problem of masks that do not fit the wearer's face.
One embodiment of the invention provides a mask customizing method based on artificial intelligence, which comprises the following steps: acquiring face data of a user, wherein the face data comprises at least face depth information and face color information; obtaining three-dimensional face point cloud data of the user by applying a face feature acquisition model to the face data; obtaining three-dimensional face mesh data of the user by applying a face three-dimensional imaging model to at least the point cloud data; and obtaining mask parameters corresponding to the user based on at least the user's three-dimensional face mesh data.
In some embodiments, the obtaining facial data of the user comprises: acquiring face depth information of the user by using an infrared sensor; acquiring facial color information of the user by using an optical sensor; and obtaining the face data of the user at least based on the face depth information of the user and the face color information of the user.
In some embodiments, the obtaining, based on the face data of the user, the three-dimensional point cloud data of the face of the user by using a face feature acquisition model includes: processing the face depth information of the user by using a first convolution network to obtain first face depth information; processing the face color information of the user by utilizing a second convolution network to obtain first face color information; merging the first face depth information and the first face color information to obtain first processed information; and processing the first processed information by utilizing a third convolution network to obtain three-dimensional point cloud data corresponding to the face data of the user.
In some embodiments, the obtaining, based on at least the face three-dimensional point cloud data, face three-dimensional mesh data of the user by using a face three-dimensional imaging model includes: processing the human face three-dimensional point cloud data by using a point cloud smoothing network to obtain smoothed human face three-dimensional point cloud data; and utilizing a connection relation reconstruction network to process the smoothed human face three-dimensional point cloud data to obtain human face three-dimensional grid data of the user.
In some embodiments, the obtaining mask parameters corresponding to the user based on at least the three-dimensional mesh data of the face of the user further includes: based on the human face three-dimensional grid data of the user, three-dimensional printing is carried out by using a preset modeling method to obtain a human face three-dimensional model of the user; and obtaining mask parameters corresponding to the user based on the human face three-dimensional model of the user.
In some embodiments, mask parameters corresponding to a user are obtained based on user-defined parameters of the user and the three-dimensional mesh data of the face of the user.
In some embodiments, the mask parameters include at least: shape parameters, size parameters.
In some embodiments, the mask parameters further include a position parameter indicating a relative positional relationship between the respective components of the mask.
One embodiment of the invention provides a mask customizing method based on artificial intelligence, which comprises the following steps: acquiring face data of users of different categories according to preset user classification criteria, wherein each category contains face data of a plurality of users and the classification criteria comprise at least gender information and age information; obtaining three-dimensional face point cloud data of the plurality of users in each category by applying a face feature acquisition model to their face data; obtaining three-dimensional face mesh data of the plurality of users in each category by applying a face three-dimensional imaging model to at least their point cloud data; averaging the vertex information of the three-dimensional face mesh data of the plurality of users in each category to obtain three-dimensional face mesh data corresponding to that category; and obtaining mask parameters corresponding to each user category based on at least that category's three-dimensional face mesh data.
One embodiment of the present invention provides an artificial intelligence based mask customizing device, which includes a processor and a memory, wherein the memory stores at least one program instruction, and the processor loads and executes the at least one program instruction to implement the artificial intelligence based mask customizing method as described above.
One embodiment of the present invention provides a storage medium, wherein at least one program instruction is stored in the storage medium, and the at least one program instruction is loaded and executed by a processor to implement the artificial intelligence based mask customizing method as described above.
The mask customizing method based on artificial intelligence provided by the invention acquires users' face data, obtains each user's three-dimensional mesh data by artificial-intelligence methods, and uses that mesh data to generate a mask tailored to the corresponding user, improving the comfort of wearing the mask and making it more user-friendly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a flowchart illustrating an artificial intelligence based mask customizing method 100 according to an embodiment of the present invention.
Fig. 2 is a flowchart of a method 200 for acquiring a face feature acquisition model according to an embodiment of the present invention.
Fig. 3 is a flowchart of a method 300 for acquiring three-dimensional point cloud data of a human face according to an embodiment of the present invention.
Fig. 4 is a flowchart of a method 400 for acquiring a three-dimensional imaging model of a human face according to an embodiment of the present invention.
Fig. 5 is a flowchart of a method 500 for acquiring face three-dimensional mesh data according to an embodiment of the present invention.
Fig. 6 is a flowchart of an artificial intelligence based mask customizing method 600 according to an embodiment of the present invention.
Fig. 7 is a block diagram illustrating an artificial intelligence based mask customizing apparatus 700 according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances in order to facilitate the description of the embodiments of the invention herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In an embodiment of the present invention, there is provided an artificial intelligence based mask customizing method, as shown in fig. 1, the method 100 may include:
step 110: the face data of the user may be obtained.
In some embodiments, the obtaining facial data of the user may include: acquiring face depth information of the user by using an infrared sensor; acquiring facial color information of the user by using an optical sensor; and obtaining the face data of the user at least based on the face depth information of the user and the face color information of the user.
In some embodiments, an infrared sensor and an optical sensor may be used to obtain the face depth information and the face color information, respectively, of a particular user, and the face data of that user is obtained on this basis. The face depth information records the distance from the camera to each point of the user's face, and the face color information is the image of the user's face captured by the camera.
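As an illustrative sketch only (the patent gives no code), the acquisition step above might combine a registered depth map and color image into one face-data record; `acquire_face_data` and the record layout are hypothetical names introduced here, not part of the patent:

```python
def acquire_face_data(depth_map, color_image):
    """Combine per-pixel depth (distance to camera) and RGB color into one record.

    depth_map:   H x W list of floats (distance from camera to face surface)
    color_image: H x W list of (r, g, b) tuples from the optical sensor
    """
    h, w = len(depth_map), len(depth_map[0])
    assert len(color_image) == h and len(color_image[0]) == w, \
        "depth and color images must be registered to the same resolution"
    return {"depth": depth_map, "color": color_image, "size": (h, w)}

# Toy 2x2 capture: a flat face patch 0.5 m from the camera.
depth = [[0.5, 0.5], [0.5, 0.5]]
color = [[(200, 150, 120)] * 2 for _ in range(2)]
face_data = acquire_face_data(depth, color)
```

In practice the two sensors would need spatial registration so depth and color pixels correspond; the sketch simply assumes matching resolutions.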
Step 120: based on the face data of the user, obtain the user's three-dimensional face point cloud data using the face feature acquisition model.
In some embodiments, the human face feature acquisition model is a deep learning model, the input data of the model is the face data of the user, and the output data is the three-dimensional point cloud data of the human face of the user. For the face feature acquisition model, reference may be made to the related description of fig. 2, and details are not repeated here.
In some embodiments, the three-dimensional point cloud data of the human face may be represented in the form of points, and each three-dimensional point cloud data includes three-dimensional position information, and the three-dimensional position information may be represented by three-dimensional coordinates. In some embodiments, the facial three-dimensional point cloud data may further include color information, which may include pixel information of the three-dimensional point cloud data.
Step 130: and obtaining the human face three-dimensional grid data of the user by utilizing a human face three-dimensional imaging model at least based on the human face three-dimensional point cloud data.
In some embodiments, the face three-dimensional imaging model is a deep learning model whose input is the user's three-dimensional face point cloud data and whose output is the user's three-dimensional face mesh data. For the face three-dimensional imaging model, reference may be made to the related description of fig. 4, which is not repeated here.
In some embodiments, the face three-dimensional mesh data may be represented by a graph structure, which may include: points, edges, and faces.
Step 140: and obtaining mask parameters corresponding to the user at least based on the three-dimensional grid data of the face of the user.
In some embodiments, the mask parameters may include, but are not limited to, shape parameters, size parameters, and position parameters. The shape parameters may describe, for example, spherical, parabolic, or toroidal shapes. The size parameters may give the size of each specific component of the mask, expressed in units such as microns, millimeters, or centimeters. The position parameters describe the relative positions of the mask's components and may be expressed as coordinates, for example Cartesian coordinates, of each component within the mask as a whole. The components of the mask may include, but are not limited to: ear bands, a nose bridge strip, a nose wire, the mask body, and a breather valve.
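The parameter families named above could be collected in a simple container; this is a hedged sketch, and the class name `MaskParameters`, the key names, and the example values are all assumptions rather than anything specified in the patent:

```python
from dataclasses import dataclass, field

# Hypothetical container for the mask parameters named in the text:
# a shape descriptor, per-component sizes (here in millimetres), and
# relative positions of components such as the nose strip or breather valve.
@dataclass
class MaskParameters:
    shape: str                                     # e.g. "parabolic"
    sizes_mm: dict = field(default_factory=dict)   # component -> size value
    positions: dict = field(default_factory=dict)  # component -> (x, y) on the mask body

params = MaskParameters(
    shape="parabolic",
    sizes_mm={"mask_body_width": 175.0, "nose_strip_length": 90.0},
    positions={"breather_valve": (40.0, 25.0)},
)
```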
In some embodiments, further comprising: based on the human face three-dimensional grid data of the user, three-dimensional printing is carried out by using a preset modeling method to obtain a human face three-dimensional model of the user; and obtaining mask parameters corresponding to the user based on the human face three-dimensional model of the user.
In some embodiments, three-dimensional printing software can also be used to directly print a mask based on the user's three-dimensional face mesh data. Through three-dimensional printing, a mask suited to a specific user can be generated from that user's acquired face data, making the mask more convenient and comfortable to wear and increasing the user's willingness to wear it.
In some embodiments, the mask parameters corresponding to the user may be obtained based on user-defined parameters of the user and the three-dimensional mesh data of the face of the user.
Specifically, with the common masks currently available, a user may be unable to select a mask that meets special requirements. For example, if a bump appears on the user's face, an ordinary mask may press against it and cause discomfort. Likewise, a user may prefer a looser fit in summer and a snugger fit in winter. By setting user-defined parameters and combining them with the user's three-dimensional face mesh data, mask parameters that meet the user's requirements can be obtained, satisfying the need for personalization and customization. In some embodiments, the user-defined parameters may include shape parameters, size parameters, and position parameters, among others.
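One plausible way to combine user-defined parameters with the generated ones is a simple precedence merge; the patent does not specify a merge rule, so the keys, values, and the user-wins precedence below are all illustrative assumptions:

```python
# Generated parameters, e.g. derived from the user's 3D face mesh (hypothetical keys).
generated = {"shape": "parabolic", "body_width_mm": 175.0, "fit": "snug"}

# User-defined overrides, e.g. a looser fit in summer or extra clearance
# over a facial bump; keys and values are illustrative only.
user_defined = {"fit": "loose", "extra_clearance_mm": 3.0}

def merge_mask_parameters(generated, user_defined):
    """User-defined values take precedence over the generated ones."""
    merged = dict(generated)
    merged.update(user_defined)
    return merged

final_params = merge_mask_parameters(generated, user_defined)
```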
As shown in fig. 2, the method 200 for acquiring a facial feature acquisition model may include:
step 210: the method comprises the steps of obtaining a first training set, wherein the first training set comprises a plurality of sample face data and mark data, each sample face data comprises first sample depth information and first sample color information, and the mark data are three-dimensional point cloud data corresponding to the sample face data.
Step 220: and training an initial model by using the first training set to obtain a human face feature acquisition model.
In some embodiments, as shown in fig. 3, the method 300 of obtaining the three-dimensional point cloud data of the face of the user may include:
step 310: and processing the face depth information of the user by using a first convolution network to obtain first face depth information.
In some embodiments, the first convolution network may be a CNN neural network, and the first face depth information may be obtained by processing the face depth information of the user using the first convolution network.
Step 320: and processing the face color information of the user by utilizing a second convolution network to obtain first face color information.
In some embodiments, the second convolutional network may be a CNN neural network, and the first face color information may be obtained by processing the face color information of the user using the second convolutional network.
Step 330: and merging the first face depth information and the first face color information to obtain first processed information.
Step 340: and processing the first processed information by utilizing a third convolution network to obtain three-dimensional point cloud data corresponding to the face data of the user.
In some embodiments, the third convolutional network may be a CNN neural network, and the three-dimensional point cloud data corresponding to the face data of the user may be obtained by processing the first processed information of the user using the third convolutional network.
In some embodiments, the face feature acquisition model may include a first convolutional network, a second convolutional network, and a third convolutional network.
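The three-network flow of method 300 can be sketched with stand-in functions in place of the learned CNNs. Everything below is an illustrative placeholder for the data flow (two branches, a channel merge, and a head emitting one 3D point per pixel), not the patent's actual networks:

```python
# Stand-ins for the three convolutional networks; the real networks are learned,
# so these functions only model the shapes flowing through the pipeline.
def first_conv_net(depth_map):          # depth branch: tag each pixel's depth value
    return [[("d", z) for z in row] for row in depth_map]

def second_conv_net(color_image):       # color branch: tag each pixel's color value
    return [[("c", px) for px in row] for row in color_image]

def merge(depth_feat, color_feat):      # channel-wise concatenation per pixel
    return [[df + cf for df, cf in zip(drow, crow)]
            for drow, crow in zip(depth_feat, color_feat)]

def third_conv_net(merged):             # head: emit one (x, y, z) point per pixel
    points = []
    for y, row in enumerate(merged):
        for x, feat in enumerate(row):
            z = feat[1]                 # depth channel carried through the merge
            points.append((float(x), float(y), z))
    return points

depth = [[0.5, 0.6], [0.55, 0.65]]
color = [[(200, 150, 120)] * 2 for _ in range(2)]
cloud = third_conv_net(merge(first_conv_net(depth), second_conv_net(color)))
```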
In some embodiments, as shown in fig. 4, the method 400 for acquiring the three-dimensional imaging model of the human face may include:
step 410: and acquiring a second training set, wherein the second training set comprises a plurality of sample human face three-dimensional point cloud data and second marking data, and the second marking data are sample human face three-dimensional grid data corresponding to the sample human face three-dimensional point cloud data.
Step 420: and training the initial model by using the second training set to obtain a human face three-dimensional imaging model.
In some embodiments, the method 500 of obtaining three-dimensional mesh data of a face of the user may include:
step 510: and processing the human face three-dimensional point cloud data by using a point cloud smoothing network to obtain the human face three-dimensional point cloud data after smoothing.
In some embodiments, the point cloud smoothing network may be a deep learning model, which may consist of CNNs.
Step 520: and utilizing a connection relation reconstruction network to process the smoothed human face three-dimensional point cloud data to obtain human face three-dimensional grid data of the user.
In some embodiments, the connection relation reconstruction network may be a deep learning model, which may be composed of CNNs.
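The patent's point cloud smoothing network is a learned model; as a stand-in illustration only, a classic nearest-neighbour averaging filter shows the kind of effect such smoothing has on noisy points (`smooth_point_cloud` and the brute-force neighbour search are not from the patent):

```python
# Illustrative stand-in for the point cloud smoothing network: replace each
# point by the mean of itself and its k nearest neighbours (brute force,
# fine for tiny clouds; real pipelines would use a spatial index).
def smooth_point_cloud(points, k=2):
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    smoothed = []
    for p in points:
        nearest = sorted(points, key=lambda q: dist2(p, q))[:k + 1]  # p itself + k
        smoothed.append(tuple(sum(c) / len(nearest) for c in zip(*nearest)))
    return smoothed

# An outlier spike at z = 1.0 amid flat z = 0.0 points is pulled toward the surface.
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5, 1.0)]
flattened = smooth_point_cloud(cloud, k=2)
```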
In an embodiment of the present invention, an artificial intelligence based mask customizing method is provided, which may customize a mask suitable for a specific population for a specific type of user, and the method 600 may include:
step 610: based on preset user classification criteria, acquiring face data of users of different categories, wherein each category comprises face data of a plurality of users, and the user classification criteria at least can include: gender information, age information.
In some embodiments, the population may be classified based on user classification criteria, such that a mask suitable for a specific population may be customized, for example, the population may be classified into elderly people, adults, children, etc. according to age information, and further, for example, the population may be classified into males, females, etc. according to gender information.
Step 620: and obtaining the human face three-dimensional point cloud data of the plurality of users of each category by utilizing a human face feature acquisition model based on the human face data of the plurality of users of each category.
In some embodiments, the face feature acquisition model may be used to process the face data of multiple users in each category, for example, 1000 pieces of face data of a child may be collected and processed by using the face feature model to obtain 1000 pieces of face three-dimensional point cloud data of the child. The face feature model may refer to the related contents in fig. 2 and fig. 3, and will not be described herein again.
Step 630: and obtaining the human face three-dimensional grid data of the plurality of users of each user category by using a human face three-dimensional imaging model at least based on the human face three-dimensional point cloud data of the plurality of users of each user category.
In some embodiments, the face three-dimensional point cloud data of the multiple users of each category may be processed by using a face three-dimensional imaging model, for example, 1000 pieces of face three-dimensional point cloud data of children may be processed by using the face three-dimensional imaging model to obtain 1000 pieces of face three-dimensional mesh data of children. The three-dimensional imaging model of the human face may refer to the related contents in fig. 4 and fig. 5, and details are not repeated here.
Step 640: and carrying out mean value processing on the vertex information of the face three-dimensional grid data of the plurality of users in each user category to obtain the face three-dimensional grid data corresponding to each user category.
In some embodiments, the obtained vertex information of the plurality of face three-dimensional mesh data of each user category may be averaged, for example, the vertex coordinate information corresponding to the 1000 pieces of face three-dimensional mesh data of the child obtained in step 630 may be averaged, so as to obtain the face three-dimensional mesh data corresponding to the child.
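The vertex averaging in step 640 can be sketched directly, assuming (as the step implies) that the meshes share a common topology so vertices correspond one-to-one across users; `mean_mesh_vertices` is a hypothetical name:

```python
# Average vertex positions across meshes with shared topology (same vertex
# count and connectivity), producing a mean face mesh for a user category.
def mean_mesh_vertices(meshes):
    n = len(meshes)
    num_vertices = len(meshes[0])
    assert all(len(m) == num_vertices for m in meshes), \
        "meshes must share topology so vertices correspond one-to-one"
    return [tuple(sum(m[i][c] for m in meshes) / n for c in range(3))
            for i in range(num_vertices)]

# Two toy 2-vertex "meshes" from different users of the same category.
user_a = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
user_b = [(0.0, 2.0, 0.0), (4.0, 2.0, 0.0)]
category_mesh = mean_mesh_vertices([user_a, user_b])
```

The same call would average the 1000 child meshes in the text's example, provided they were fitted with a shared template topology.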
Step 650: and obtaining mask parameters corresponding to the user categories at least based on the three-dimensional grid data of the face corresponding to each user category.
In some embodiments, three-dimensional printing can be performed by using a preset modeling method based on the face three-dimensional grid data corresponding to the user category to obtain a face three-dimensional model of the user category; obtaining mask parameters corresponding to the user category based on the human face three-dimensional model of the user category, wherein the mask parameters at least comprise: a shape parameter, a size parameter, and a position parameter.
Based on this embodiment of the invention, the three-dimensional face mesh data of a specific population can be obtained by acquiring that population's face data and processing it with the face feature acquisition model and the face three-dimensional imaging model. Mask parameters suited to the population are then derived from this mesh data, and masks meeting the population's needs can be produced, for example by three-dimensional printing.
As shown in fig. 7, an embodiment of the present invention provides an artificial intelligence-based mask customizing device, which may include: at least one processor 710, such as a CPU (Central Processing Unit), at least one communication interface 730, a memory 740, and at least one communication bus 720, where the communication bus 720 is used to enable communication between these components. The communication interface 730 may include a display (Display) and a keyboard (Keyboard), and optionally may further include a standard wired interface and a standard wireless interface. The memory 740 may be a high-speed random access memory (RAM) or a non-volatile memory, such as at least one disk memory. The memory 740 may optionally be at least one storage device located remotely from the processor 710. The processor 710 invokes program code stored in the memory 740 to perform any of the method steps described above.
The communication bus 720 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 720 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
The memory 740 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 740 may also comprise a combination of the above types of memories.
The processor 710 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP.
The processor 710 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
Optionally, memory 740 is also used to store program instructions. Processor 710 may invoke program instructions to implement the artificial intelligence based mask customization method as shown in the embodiments of fig. 1-6 of the present application.
One embodiment of the present invention provides a storage medium, wherein at least one program instruction is stored in the storage medium, and the at least one program instruction is loaded and executed by a processor to implement the artificial intelligence based mask customizing method as described above.
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (10)

1. An artificial intelligence based mask customizing method, comprising:
obtaining face data of a user, the face data including at least face depth information and face color information;
obtaining face three-dimensional point cloud data of the user by using a face feature acquisition model based on the face data of the user;
obtaining face three-dimensional grid data of the user by using a face three-dimensional imaging model based at least on the face three-dimensional point cloud data;
and obtaining mask parameters corresponding to the user based at least on the face three-dimensional grid data of the user.
2. The method according to claim 1, wherein the obtaining face data of the user comprises:
acquiring the face depth information of the user by using an infrared sensor;
acquiring the face color information of the user by using an optical sensor;
and obtaining the face data of the user based at least on the face depth information and the face color information of the user.
3. The method according to claim 1, wherein the obtaining face three-dimensional point cloud data of the user by using the face feature acquisition model based on the face data of the user comprises:
processing the face depth information of the user by using a first convolutional network to obtain first face depth information;
processing the face color information of the user by using a second convolutional network to obtain first face color information;
merging the first face depth information and the first face color information to obtain first processed information;
and processing the first processed information by using a third convolutional network to obtain the face three-dimensional point cloud data corresponding to the face data of the user.
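The first, second, and third convolutional networks of claim 3 are not specified further in this text. The following NumPy sketch is only a minimal illustration of the claimed structure, with one toy convolution per "network", random weights, and invented sizes: a depth branch and a color branch are merged by channel concatenation before a final network emits one 3-D point per pixel.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D cross-correlation: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
    return out

rng = np.random.default_rng(0)
depth = rng.normal(size=(1, 8, 8))   # face depth map, 1 channel (toy size)
color = rng.normal(size=(3, 8, 8))   # face color image, RGB

w1 = rng.normal(size=(4, 1, 3, 3))   # "first convolutional network" (depth branch)
w2 = rng.normal(size=(4, 3, 3, 3))   # "second convolutional network" (color branch)
w3 = rng.normal(size=(3, 8, 1, 1))   # "third network": 1x1 conv -> (x, y, z) per pixel

f_depth = np.maximum(conv2d(depth, w1), 0)      # first face depth information
f_color = np.maximum(conv2d(color, w2), 0)      # first face color information
merged = np.concatenate([f_depth, f_color], 0)  # merge along the channel axis
points = conv2d(merged, w3).reshape(3, -1).T    # (H*W, 3) face point cloud
```

In practice each branch would be a deep network (e.g. in PyTorch or TensorFlow); the sketch only shows the two-branch extraction, channel-wise merge, and per-pixel 3-D regression that the claim describes.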
4. The method according to claim 1, wherein the obtaining face three-dimensional grid data of the user by using the face three-dimensional imaging model based at least on the face three-dimensional point cloud data comprises:
processing the face three-dimensional point cloud data by using a point cloud smoothing network to obtain smoothed face three-dimensional point cloud data;
and processing the smoothed face three-dimensional point cloud data by using a connection relation reconstruction network to obtain the face three-dimensional grid data of the user.
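The point cloud smoothing network of claim 4 is a learned model whose architecture is not disclosed here. Purely for illustration, a classical (non-learned) stand-in for such smoothing is a Laplacian-style pass that moves each point toward the mean of its k nearest neighbours:

```python
import numpy as np

def smooth_point_cloud(points: np.ndarray, k: int = 4, lam: float = 0.5) -> np.ndarray:
    """One Laplacian-style smoothing pass over an (N, 3) point cloud.

    Each point is moved a fraction lam toward the centroid of its k
    nearest neighbours. A stand-in for the patent's learned network.
    """
    # Pairwise distances, (N, N)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Indices of the k nearest neighbours, excluding the point itself
    idx = np.argsort(d, axis=1)[:, 1:k + 1]
    neighbour_means = points[idx].mean(axis=1)
    return (1.0 - lam) * points + lam * neighbour_means

# Toy cloud: 30 points along a line, perturbed by noise
rng = np.random.default_rng(1)
noisy = np.linspace(0.0, 1.0, 30)[:, None] * np.array([1.0, 0.0, 0.0])
noisy = noisy + rng.normal(scale=0.05, size=noisy.shape)
smoothed = smooth_point_cloud(noisy)
```

The subsequent connection relation reconstruction step (building mesh faces over the smoothed points) is likewise a learned network in the claim; classical alternatives would be Delaunay-based or Poisson surface reconstruction.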
5. The method according to claim 1, wherein the obtaining mask parameters corresponding to the user based at least on the face three-dimensional grid data of the user further comprises:
performing three-dimensional printing by using a preset modeling method based on the face three-dimensional grid data of the user to obtain a face three-dimensional model of the user;
and obtaining the mask parameters corresponding to the user based on the face three-dimensional model of the user.
6. The method according to claim 1, wherein the obtaining mask parameters corresponding to the user based at least on the face three-dimensional grid data of the user comprises:
obtaining the mask parameters corresponding to the user based on user-defined parameters of the user and the face three-dimensional grid data of the user.
7. The method according to any one of claims 1 to 6, wherein the mask parameters include at least: a shape parameter and a size parameter.
8. The method according to any one of claims 1 to 6, wherein the mask parameters further include a position parameter indicating a relative positional relationship between respective components of the mask.
9. An artificial intelligence based mask customizing method, comprising:
acquiring face data of users of different categories based on a preset user classification standard, wherein each category includes face data of a plurality of users, and the user classification standard includes at least: gender information and age information;
obtaining face three-dimensional point cloud data of the plurality of users of each user category by using a face feature acquisition model based on the face data of the plurality of users of each user category;
obtaining face three-dimensional grid data of the plurality of users of each user category by using a face three-dimensional imaging model based at least on the face three-dimensional point cloud data of the plurality of users of each user category;
averaging vertex information of the face three-dimensional grid data of the plurality of users of each user category to obtain face three-dimensional grid data corresponding to each user category;
and obtaining mask parameters corresponding to each user category based at least on the face three-dimensional grid data corresponding to each user category.
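The averaging step of claim 9 implicitly requires that the per-user grid data share a common topology, so that vertices correspond across users. Under that assumption, the category-level grid is simply the per-vertex mean; the toy coordinates below are invented for illustration:

```python
import numpy as np

# Three users in one category; meshes are assumed to share topology,
# so vertex i of each mesh describes the same facial location.
user_meshes = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    [[0.2, 0.0, 0.0], [1.2, 0.0, 0.0]],
    [[0.1, 0.3, 0.0], [1.1, 0.3, 0.0]],
])  # shape: (num_users, num_vertices, xyz)

# Average vertex information across users to obtain the
# face three-dimensional grid data for the category.
category_vertices = user_meshes.mean(axis=0)
```

Real face meshes would first need to be registered to a common template (e.g. by non-rigid alignment) before such vertex-wise averaging is meaningful.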
10. An artificial intelligence based mask customizing device, comprising a processor and a memory, wherein at least one program instruction is stored in the memory, and the processor loads and executes the at least one program instruction to realize the artificial intelligence based mask customizing method according to any one of claims 1 to 9.
CN202010328855.2A 2020-04-23 2020-04-23 Mask customizing method and device based on artificial intelligence and storage medium Pending CN113554740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010328855.2A CN113554740A (en) 2020-04-23 2020-04-23 Mask customizing method and device based on artificial intelligence and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010328855.2A CN113554740A (en) 2020-04-23 2020-04-23 Mask customizing method and device based on artificial intelligence and storage medium

Publications (1)

Publication Number Publication Date
CN113554740A true CN113554740A (en) 2021-10-26

Family

ID=78101126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010328855.2A Pending CN113554740A (en) 2020-04-23 2020-04-23 Mask customizing method and device based on artificial intelligence and storage medium

Country Status (1)

Country Link
CN (1) CN113554740A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107006921A (en) * 2017-04-19 2017-08-04 北京随能科技有限公司 A kind of method for making personal customization mouth mask
CN107038752A (en) * 2017-04-07 2017-08-11 首都医科大学附属北京儿童医院 A kind of customized type mouth mask and preparation method thereof
US20170255185A1 (en) * 2016-03-01 2017-09-07 Glen D. Hinshaw System and method for generating custom shoe insole
CN108876913A (en) * 2018-07-11 2018-11-23 天门市志远信息科技有限公司 A kind of intelligent clothing clip system and method
CN109190533A (en) * 2018-08-22 2019-01-11 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
查道安 (ZHA DAOAN): "Three-dimensional face reconstruction based on neural networks", Journal of Anhui Polytechnic University (《安徽工程大学学报》), vol. 34, no. 2, 15 April 2019 (2019-04-15), pages 2-3 *
王钰涵 (WANG YUHAN): "Research on the styling design of anti-smog masks based on face shape feature data", China Market (《中国市场》), no. 07, 8 March 2017 (2017-03-08), pages 2-3 *

Similar Documents

Publication Publication Date Title
US11334971B2 (en) Digital image completion by learning generation and patch matching jointly
US11302064B2 (en) Method and apparatus for reconstructing three-dimensional model of human body, and storage medium
CN108510437B (en) Virtual image generation method, device, equipment and readable storage medium
US11250548B2 (en) Digital image completion using deep learning
US10489683B1 (en) Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks
WO2022160701A1 (en) Special effect generation method and apparatus, device, and storage medium
CN110662484B (en) System and method for whole body measurement extraction
EP3567516B1 (en) Method and device for detecting feature point in image, and computer-readable storage medium
CN109952594B (en) Image processing method, device, terminal and storage medium
KR102442486B1 (en) 3D model creation method, apparatus, computer device and storage medium
US10878566B2 (en) Automatic teeth whitening using teeth region detection and individual tooth location
US20230073340A1 (en) Method for constructing three-dimensional human body model, and electronic device
CN108764143B (en) Image processing method, image processing device, computer equipment and storage medium
CN111402217B (en) Image grading method, device, equipment and storage medium
US20210097651A1 (en) Image processing method and apparatus, electronic device, and storage medium
US10853983B2 (en) Suggestions to enrich digital artwork
US11875468B2 (en) Three-dimensional (3D) image modeling systems and methods for determining respective mid-section dimensions of individuals
CN115439308A (en) Method for training fitting model, virtual fitting method and related device
CN110533761B (en) Image display method, electronic device and non-transient computer readable recording medium
CN111091055A (en) Face shape recognition method, device, equipment and computer readable storage medium
CN113554740A (en) Mask customizing method and device based on artificial intelligence and storage medium
CN111784611A (en) Portrait whitening method, portrait whitening device, electronic equipment and readable storage medium
TWI731447B (en) Beauty promotion device, beauty promotion system, beauty promotion method, and beauty promotion program
CN113076782A (en) Fan control method, device and computer readable storage medium
CN111126568B (en) Image processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination