CN116630350A - Wig wearing monitoring management method and system - Google Patents

Wig wearing monitoring management method and system

Info

Publication number
CN116630350A
Authority
CN
China
Prior art keywords
wearing
wig
image data
state
target
Prior art date
Legal status
Granted
Application number
CN202310919697.1A
Other languages
Chinese (zh)
Other versions
CN116630350B (en)
Inventor
魏海泉 (Wei Haiquan)
Current Assignee
Reese Fashion Shenzhen Co ltd
Original Assignee
Reese Fashion Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Reese Fashion Shenzhen Co ltd
Priority to CN202310919697.1A
Publication of CN116630350A
Application granted
Publication of CN116630350B
Legal status: Active

Classifications

    • G06T 7/11 Region-based segmentation (image analysis; segmentation; edge detection)
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06V 10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/806 Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of artificial intelligence and discloses a wig wearing monitoring and management method and system, which are used to improve the accuracy of wig wearing data analysis and of user hair detection. The method comprises the following steps: performing scalp area state identification on first image data to obtain a scalp index set, and performing pixel-level image area segmentation on the first image data to generate a plurality of second image data; performing root parameter analysis on the plurality of second image data to obtain a hair root index set; inputting the scalp index set and the hair root index set into a preset wig wearing management model for processing to generate a target feature vector; and inputting the target feature vector, the wearing frequency and the wearing duration into a regression prediction network in the wig wearing management model to perform wig wearing state analysis, obtaining a wig wearing state analysis result.

Description

Wig wearing monitoring management method and system
Technical Field
The invention relates to the field of artificial intelligence, in particular to a wig wearing monitoring management method and system.
Background
As living standards improve, more and more people use wigs to improve their personal image. However, an improper wearing manner may adversely affect the user's scalp and may even cause scalp diseases.
Existing schemes generally detect the wig worn by the user through image recognition and then rely on manual inspection of the wig area on the user's head to provide wearing suggestions. Because manual experience is uncertain, the detection accuracy of these schemes is low.
Disclosure of Invention
The invention provides a wig wearing monitoring management method and system, which are used for improving the analysis accuracy of wig wearing data and improving the accuracy of user hair detection.
The first aspect of the present invention provides a wig wearing monitoring management method, comprising:
acquiring wig wearing data of a target user based on a preset sensor, and performing wig wearing feature operation on the wig wearing data to obtain wearing frequency and wearing duration;
acquiring initial image data of the target user, and performing image area positioning on the initial image data to obtain first image data;
performing scalp area state identification on the first image data to obtain a scalp index set, and performing pixel-level image area segmentation on the first image data to generate a plurality of second image data;
Respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set;
inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics to obtain a state characteristic vector, and inputting the hair root index set into a three-layer convolution network in the wig wearing management model to perform parameter characteristic analysis to obtain a parameter characteristic vector;
and carrying out vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency and the wearing time length into a regression prediction network in the wig wearing management model to carry out wig wearing state analysis to obtain a wig wearing state analysis result.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the obtaining, based on a preset sensor, wig wearing data of a target user, and performing a wig wearing feature operation on the wig wearing data, to obtain a wearing frequency and a wearing duration, includes:
acquiring wig wearing data of a target user based on a preset sensor, and performing statistical duration analysis on the wig wearing data to obtain wearing duration;
Scanning the head area of the target user through the sensor, and constructing a head closed curved surface of the target user;
inputting the wearing time length and the head closed curved surface into a preset wearing frequency analysis function to calculate the wearing frequency, so as to obtain the wearing frequency;
the wearing frequency analysis function is as follows:

P = (1/T) ∬_S ∫_{t0}^{t1} f(x, y, z, t) dt dS

wherein P represents the wearing frequency, T represents the wearing duration, S represents the head closed curved surface, f(x, y, z, t) represents the wig wearing state at the space point (x, y, z) at time t and takes the value 0 or 1, a value of 0 indicating that the wig is not worn and a value of 1 indicating that the wig is worn, and t0 and t1 respectively represent the start time and the end time of the time period.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the acquiring initial image data of the target user, and performing image area positioning on the initial image data, to obtain first image data includes:
acquiring initial image data of the target user, and performing polar coordinate conversion on the initial image data to obtain polar coordinate image data;
according to the polar coordinate image data, calculating a minimum angle and a maximum angle corresponding to the head area of the target user to obtain a target angle range;
Calculating the minimum radius and the maximum radius corresponding to the head area of the target user according to the polar coordinate image data, and generating a target radial range;
traversing all first characteristic points in the polar coordinate image data, and screening the first characteristic points based on the target angle range and the target radial range to obtain second characteristic points;
and extracting characteristic images of the polar coordinate image data according to the second characteristic points to obtain first image data.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the performing scalp region state identification on the first image data to obtain a scalp index set, and performing pixel-level image region segmentation on the first image data to generate a plurality of second image data includes:
performing scalp area state identification on the first image data to obtain a scalp index set, wherein the scalp index set comprises: dandruff index data and hair number index;
performing two-dimensional matrix conversion on the first image data to obtain a target two-dimensional matrix, and performing singular value decomposition on the target two-dimensional matrix to obtain a singular value decomposition result;
According to the singular value decomposition result, carrying out segmentation region probability calculation on all pixel points in the first image data to obtain a target probability value of each pixel point;
and carrying out pixel-level image region segmentation on the first image data according to the target probability value of each pixel point to generate a plurality of second image data.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the performing root parameter analysis on the plurality of second image data to obtain a root indicator set includes:
respectively carrying out edge detection on the plurality of second image data to obtain the contour line of each hair;
calculating an average distance of the contour line from the scalp surface and a minimum distance between the contour line and the scalp surface according to the contour line of each hair;
according to the average distance and the minimum distance, separating the wig from the real hair of the target user to obtain target real hair;
performing root parameter analysis on the target real hair to obtain a root index set, wherein the root index set comprises: hair follicle number index and number of blocked hair follicles index.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to perform state feature extraction to obtain a state feature vector, and inputting the root index set into a three-layer convolution network in the wig wearing management model to perform parameter feature analysis to obtain a parameter feature vector includes:
inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics, and obtaining a forward hidden state vector and a backward hidden state vector;
vector stitching is carried out on the forward hidden state vector and the backward hidden state vector to obtain a state characteristic vector;
inputting the hair root index set into a three-layer convolution network in the wig wearing management model, and carrying out parameter characteristic analysis on the hair root index set through the three-layer convolution network to obtain a plurality of parameter characteristic values;
and carrying out vector conversion on the plurality of parameter characteristic values to generate a parameter characteristic vector.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the performing vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency and the wearing duration into a regression prediction network in the wig wearing management model to perform wig wearing state analysis, so as to obtain a wig wearing state analysis result, where the method includes:
Vector fusion is carried out on the state characteristic vector and the parameter characteristic vector to obtain a target characteristic vector;
matching weight and deviation parameters corresponding to a regression prediction network in the wig wearing management model according to the wearing frequency and the wearing duration;
and inputting the target feature vector into the regression prediction network to analyze the wearing state of the wig based on the weight and the deviation parameter, and obtaining the analysis result of the wearing state of the wig.
A second aspect of the present invention provides a wig wearing monitoring management system comprising:
the device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring wig wearing data of a target user based on a preset sensor, and performing wig wearing feature operation on the wig wearing data to acquire wearing frequency and wearing duration;
the positioning module is used for acquiring initial image data of the target user, and performing image area positioning on the initial image data to obtain first image data;
the segmentation module is used for carrying out scalp area state recognition on the first image data to obtain a scalp index set, and carrying out pixel-level image area segmentation on the first image data to generate a plurality of second image data;
The analysis module is used for respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set;
the processing module is used for inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics to obtain a state characteristic vector, and inputting the hair root index set into a three-layer convolution network in the wig wearing management model to perform parameter characteristic analysis to obtain a parameter characteristic vector;
and the prediction module is used for carrying out vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency and the wearing duration into a regression prediction network in the wig wearing management model to carry out wig wearing state analysis to obtain a wig wearing state analysis result.
A third aspect of the present invention provides a wig wearing monitoring management apparatus comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the wig wear monitoring management device to perform the wig wear monitoring management method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the above-described wig wear monitoring management method.
In the technical scheme provided by the invention, scalp area state identification is carried out on the first image data to obtain a scalp index set, and pixel-level image area segmentation is carried out on the first image data to generate a plurality of second image data; respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set; the scalp index set is input into a preset wig wearing management model, the hair root index set is input into the wig wearing management model for processing, a target feature vector is generated, the target feature vector, the wearing frequency and the wearing time length are input into a regression prediction network in the wig wearing management model for performing wig wearing state analysis, and a wig wearing state analysis result is obtained.
Drawings
FIG. 1 is a schematic view showing an embodiment of a wig wearing monitoring management method according to the present invention;
FIG. 2 is a flow chart of image region positioning in an embodiment of the invention;
FIG. 3 is a flowchart of pixel level image region segmentation in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of root parameter analysis in an embodiment of the present invention;
FIG. 5 is a schematic view of an embodiment of a wig wear monitoring management system according to the present invention;
fig. 6 is a schematic view showing an embodiment of the wig wearing monitoring management apparatus in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a wig wearing monitoring management method and system, which are used for improving the analysis accuracy of wig wearing data and improving the accuracy of user hair detection. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention will be described below with reference to fig. 1, and one embodiment of a wig wear monitoring management method according to an embodiment of the present invention includes:
s101, acquiring wig wearing data of a target user based on a preset sensor, and performing wig wearing feature operation on the wig wearing data to obtain wearing frequency and wearing duration;
it will be appreciated that the execution body of the present invention may be a wig wearing monitoring management system, or may be a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as the execution body as an example.
In particular, the server may calculate the wearing frequency using triple integration, assuming that the head area of the wig worn by the target user may be regarded as an area in a three-dimensional space, and assuming that the area may be surrounded by a certain closed curved surface S. The specific formula is as follows:
wearing frequency P = (1/T) ∬_S ∫_{t0}^{t1} f(x, y, z, t) dt dS

Wherein P represents the wearing frequency, T represents the wearing duration, S represents the head closed curved surface, f(x, y, z, t) represents the wig wearing state at the space point (x, y, z) at time t and takes the value 0 or 1, a value of 0 indicating that the wig is not worn and a value of 1 indicating that the wig is worn, and t0 and t1 respectively represent the start time and the end time of the time period. The double integral in the above formula traverses the whole closed curved surface S and accumulates each position; the innermost single integral traverses all the involved time points and accumulates the wearing state inside S at each time point. The accumulated value, divided by the wearing duration T, gives the wearing frequency.
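As an illustration of how this triple integration could be evaluated in practice, the following is a minimal numerical sketch, assuming the head surface S is discretized into patches with known areas and the sensor reports a 0/1 wearing state per patch at each sampling time; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def wearing_frequency(worn, patch_areas, timestamps, total_duration):
    """Numerically approximate P = (1/T) * surface integral over S of the
    time integral of the binary wearing state f(x, y, z, t).

    worn           : (n_patches, n_times) array of 0/1 wearing states
    patch_areas    : (n_patches,) area of each surface patch of S
    timestamps     : (n_times,) sampling times between t0 and t1
    total_duration : T, the wearing duration
    """
    # Inner integral over time for every surface patch (trapezoidal rule).
    time_integral = np.trapz(worn, timestamps, axis=1)        # (n_patches,)
    # Outer double integral over the closed head surface S.
    surface_integral = np.sum(time_integral * patch_areas)    # scalar
    return surface_integral / total_duration

# Toy usage: 4 surface patches, states sampled once per hour over 8 hours.
worn = np.array([[1, 1, 1, 1, 0, 0, 1, 1],
                 [1, 1, 1, 1, 0, 0, 1, 1],
                 [0, 1, 1, 1, 0, 0, 1, 0],
                 [0, 0, 1, 1, 0, 0, 0, 0]])
areas = np.array([12.0, 11.5, 9.8, 10.2])   # cm^2 per patch (illustrative)
hours = np.arange(8.0)
print(wearing_frequency(worn, areas, hours, total_duration=8.0))
```

With denser surface and time sampling, the two sums approach the double and single integrals described above.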
S102, acquiring initial image data of a target user, and performing image area positioning on the initial image data to obtain first image data;
specifically, the server attaches the auxiliary device to the hair imaging device of the hair detection apparatus such that the imaging unit of the hair imaging device is aligned with the center hole of the main body portion of the auxiliary device. When the wig to be detected needs to be photographed, the annular rotating part is turned so that it drives the rotating blades through the connecting rods to open the central hole, and the imaging unit can then photograph an image of the area to be detected while the central hole is open. Meanwhile, turning the annular rotating part changes the opening size of the central hole, so the size of the shooting area can be freely adjusted within a certain range, improving the operability of hair detection. The angular range occupied by the region is first determined, denoted θmin and θmax, representing the minimum and maximum angles of the region, respectively. The radial extent of the region, i.e. the range of distances from the origin, is then determined, denoted rmin and rmax, representing the minimum and maximum radii of the region, respectively. For each point in the polar image, the corresponding angle θ and radius r are calculated to determine whether the point falls within the determined region, i.e. whether θmin ≤ θ ≤ θmax and rmin ≤ r ≤ rmax. By traversing all points in the polar coordinate image and counting the points or other characteristic values located in the determined area, region positioning is achieved.
Further, according to the result of the region positioning, the coordinate range corresponding to the region is obtained (for example, the coordinates of the upper left corner and the lower right corner). The original image file is read using an image processing library or custom code and converted into a two-dimensional array format. Further preprocessing, such as cropping, rotation or scaling, may be required before extracting the target region from the image; these operations can be applied selectively according to the specific circumstances. The target image region is then extracted from the two-dimensional array of the original image according to the coordinate range of the region positioning, which may be achieved by slicing the two-dimensional array. The extracted target image is saved to disk or passed directly to a subsequent algorithm model for processing.
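A minimal sketch of the polar-coordinate region positioning and slicing described above, assuming the pole is placed at the image centre and the angle/radius thresholds are supplied externally; all names are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def locate_head_region(image, theta_range, r_range):
    """Convert pixel coordinates to polar form around the image centre and
    keep only the points whose angle and radius fall inside the target ranges.

    image       : 2-D grayscale array
    theta_range : (theta_min, theta_max) in radians
    r_range     : (r_min, r_max) in pixels
    Returns the boolean mask and the cropped first-image-data region.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)          # radius of every pixel
    theta = np.arctan2(ys - cy, xs - cx)    # angle of every pixel

    mask = ((theta_range[0] <= theta) & (theta <= theta_range[1]) &
            (r_range[0] <= r) & (r <= r_range[1]))

    # Bounding box of the retained points -> coordinate range of the region.
    rows, cols = np.where(mask)
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    first_image_data = image[top:bottom + 1, left:right + 1]  # array slicing
    return mask, first_image_data
```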
S103, performing scalp area state identification on the first image data to obtain a scalp index set, and performing pixel-level image area segmentation on the first image data to generate a plurality of second image data;
scalp health monitoring of the user is achieved by obtaining scalp image information of the user and calling a first detector model to detect pictures containing hair and/or scalp information, thereby obtaining information on the hair root areas, the amount of dandruff and the amount of inflammation. A matrix decomposition algorithm may be used for pixel-level image region segmentation: the probability that each pixel belongs to each segmented region is obtained, the pixels are assigned to the corresponding segmented regions, and a plurality of second image data are finally generated.
S104, respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set;
specifically, for a given hair image, the contour line of each hair is first extracted, which can be achieved using image processing techniques such as Canny edge detection and threshold segmentation. Then, for the contour line of each hair, the average distance from the scalp surface and the minimum distance to the scalp surface are calculated, and the root parameter is computed from this distance information: the closer a contour line is to the scalp, the more likely the hair is real hair, while the larger the distance between the contour and the scalp, the more likely it belongs to the wig. Finally, the root index set is generated according to the root parameters.
S105, inputting a scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics to obtain a state characteristic vector, and inputting a hair root index set into a three-layer convolution network in the wig wearing management model to perform parameter characteristic analysis to obtain a parameter characteristic vector;
in the wig wearing management model, the two index sets are processed by different methods. For the scalp index set, a bidirectional threshold cyclic network is used to extract state characteristics. This recurrent neural network model is suited to sequence data, has good memory capacity and can capture long-term dependencies in an input sequence; using a bidirectional network allows the input sequence to be viewed from both directions simultaneously, which helps improve the coverage and accuracy of the state features. The scalp index set is used as the model input, and during training the model automatically learns how to extract important state features from the scalp indices and convert them into a fixed-length vector, namely the state feature vector. For the hair root index set, parameter characteristic analysis is performed using a three-layer convolution network. The convolutional neural network is a model widely applied in image processing and computer vision and has good feature extraction capability. The hair root index set is here treated as a two-dimensional matrix, with each row representing one group of index data. The spatial relationships between the index data are extracted through convolution operations, while pooling operations reduce the feature dimension, finally yielding a fixed-length parameter feature vector. The state feature vector and the parameter feature vector are the key outputs of the wig wearing management model and reflect different aspects of the wearer's head condition and wig wearing condition. These feature vectors can be input into a classifier or regressor to determine whether the wearer is wearing the wig correctly and to give corresponding advice. In this way, problems in the wig wearing process can be effectively monitored and corrected in time, ensuring the comfort and safety of the wearer.
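The following is a minimal PyTorch sketch of the two branches, assuming the bidirectional threshold circulation network behaves like a bidirectional GRU and the hair root index set is arranged as a single-channel two-dimensional matrix; the layer sizes and class names are illustrative assumptions, not the patent's actual architecture.

```python
import torch
import torch.nn as nn

class StateFeatureExtractor(nn.Module):
    """Bidirectional recurrent branch for the scalp index set."""
    def __init__(self, n_indices=4, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_indices, hidden, batch_first=True, bidirectional=True)

    def forward(self, scalp_seq):                 # (batch, time, n_indices)
        _, h = self.rnn(scalp_seq)                # h: (2, batch, hidden)
        # Splice the forward and backward hidden states into one state vector.
        return torch.cat([h[0], h[1]], dim=-1)    # (batch, 2 * hidden)

class ParamFeatureExtractor(nn.Module):
    """Three-layer convolutional branch for the hair root index matrix."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # pool down to one value per channel
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, root_matrix):               # (batch, 1, rows, cols)
        return self.fc(self.conv(root_matrix).flatten(1))
```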
S106, carrying out vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency and the wearing time length into a regression prediction network in the wig wearing management model to carry out wig wearing state analysis, so as to obtain a wig wearing state analysis result.
In the wig wearing management model, the state feature vector and the parameter feature vector are fused to obtain the target feature vector. After the target feature vector is obtained, it can be input, together with the wearing frequency and the wearing duration, into a regression prediction network for wig wearing state analysis. The regression prediction network is a neural network model that learns output values from input data and is suited to regression problems, i.e. predicting corresponding output values given a set of input features. Here, the wearing state of the wig, including whether it is worn correctly, the tightness of the wearing and the wearing position, is analysed through the regression prediction network. Training the regression prediction network requires a large amount of labelled data, i.e. a data set of known wearing states and related characteristics, which may be obtained by manual or automated labelling. During training, the network learns from the input characteristics, such as the target feature vector, the wearing frequency and the wearing duration, together with the labelled wearing state, so that it can predict the wearing state of a new, unknown sample. In this way, automatic analysis and judgement of the wig wearing state can be achieved, reducing manual intervention and improving efficiency. Finally, the regression prediction network outputs the wig wearing state analysis result, which reflects various aspects of the wearing state. This result can be used to instruct the wearer to wear the wig properly and to provide corresponding advice; it can also be used by manufacturers to improve the design and manufacture of wigs and thereby improve user experience and satisfaction.
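A minimal sketch of this fusion and regression step, assuming concatenation as the vector fusion and a small fully connected regressor standing in for the regression prediction network; the output dimensions and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WigWearRegressor(nn.Module):
    """Fuse the state/parameter feature vectors with wearing frequency and
    duration, then regress a wearing-state estimate."""
    def __init__(self, state_dim=64, param_dim=64, out_dim=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(state_dim + param_dim + 2, 64), nn.ReLU(),
            nn.Linear(64, out_dim),   # e.g. correctness, tightness, position scores
        )

    def forward(self, state_vec, param_vec, frequency, duration):
        target_vec = torch.cat([state_vec, param_vec], dim=-1)   # vector fusion
        extra = torch.stack([frequency, duration], dim=-1)       # (batch, 2)
        return self.head(torch.cat([target_vec, extra], dim=-1))

# Training would minimise a regression loss such as nn.MSELoss()
# against the labelled wearing states mentioned above.
```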
In the embodiment of the invention, scalp area state identification is carried out on first image data to obtain a scalp index set, and pixel-level image area segmentation is carried out on the first image data to generate a plurality of second image data; respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set; the scalp index set is input into a preset wig wearing management model, the hair root index set is input into the wig wearing management model for processing, a target feature vector is generated, the target feature vector, the wearing frequency and the wearing time length are input into a regression prediction network in the wig wearing management model for performing wig wearing state analysis, and a wig wearing state analysis result is obtained.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) Acquiring wig wearing data of a target user based on a preset sensor, and performing statistical duration analysis on the wig wearing data to obtain wearing duration;
(2) Scanning a head area of a target user through a sensor, and constructing a head closed curved surface of the target user;
(3) Inputting the wearing time length and the head closed curved surface into a preset wearing frequency analysis function to calculate the wearing frequency, so as to obtain the wearing frequency;
specifically, wig wearing data of a target user are obtained based on a preset sensor, statistical duration analysis is performed on the wig wearing data to obtain wearing duration, the sensor is used for scanning the head area of the target user, a head closed curved surface of the target user is constructed, the wearing duration and the head closed curved surface are input into a preset wearing frequency analysis function to carry out wearing frequency calculation, and the wearing frequency analysis function is:
P = (1/T) ∬_S ∫_{t0}^{t1} f(x, y, z, t) dt dS

wherein P represents the wearing frequency, T represents the wearing duration, S represents the head closed curved surface, f(x, y, z, t) represents the wig wearing state at the space point (x, y, z) at time t and takes the value 0 or 1, a value of 0 indicating that the wig is not worn and a value of 1 indicating that the wig is worn, and t0 and t1 respectively represent the start time and the end time of the time period.
In a specific embodiment, as shown in fig. 2, the process of executing step S102 may specifically include the following steps:
s201, acquiring initial image data of a target user, and performing polar coordinate conversion on the initial image data to obtain polar coordinate image data;
s202, calculating a minimum angle and a maximum angle corresponding to a head region of a target user according to polar coordinate image data to obtain a target angle range;
s203, calculating the minimum radius and the maximum radius corresponding to the head area of the target user according to the polar coordinate image data, and generating a target radial range;
s204, traversing all first characteristic points in the polar coordinate image data, and screening the first characteristic points based on the target angle range and the target radial range to obtain second characteristic points;
and S205, extracting characteristic images of the polar coordinate image data according to the second characteristic points to obtain first image data.
Specifically, the server acquires initial image data of the target user and performs polar coordinate conversion on the initial image data to obtain polar coordinate image data. Polar transformation is a mathematical method of converting points in a rectangular coordinate system to points in a polar coordinate system; by performing polar coordinate conversion on the initial image data, the information of the head region can be extracted. The minimum angle and the maximum angle corresponding to the head region of the target user are then calculated from the polar coordinate image data to obtain the target angle range, which can be used to determine the position and size of the head region. Likewise, the minimum radius and the maximum radius corresponding to the head region are calculated from the polar coordinate image data to generate the target radial range, which can be used to determine the size and shape of the head region. Next, all first characteristic points in the polar coordinate image data are traversed and screened based on the target angle range and the target radial range to obtain the second characteristic points; a first characteristic point refers to a salient point in the polar image representing an important feature of the head region. Finally, characteristic image extraction is performed on the polar coordinate image data according to the second characteristic points to obtain the first image data; the information contained in the second characteristic point set is combined with the polar coordinate image data to generate a feature image that effectively reflects the shape and characteristics of the target user's head region.
In a specific embodiment, as shown in fig. 3, the process of executing step S103 may specifically include the following steps:
s301, performing scalp area state identification on the first image data to obtain a scalp index set, wherein the scalp index set comprises: dandruff index data and hair number index;
s302, performing two-dimensional matrix conversion on the first image data to obtain a target two-dimensional matrix, and performing singular value decomposition on the target two-dimensional matrix to obtain a singular value decomposition result;
s303, carrying out segmentation region probability calculation on all pixel points in the first image data according to a singular value decomposition result to obtain a target probability value of each pixel point;
s304, according to the target probability value of each pixel point, pixel-level image region segmentation is carried out on the first image data, and a plurality of second image data are generated.
Specifically, the server performs scalp area state identification on the first image data to obtain a scalp index set, wherein the scalp index set comprises dandruff index data and a hair number index; this identification yields information about the scalp condition. Two-dimensional matrix conversion is then performed on the first image data to obtain a target two-dimensional matrix, and singular value decomposition is performed on the target two-dimensional matrix to obtain a singular value decomposition result. Singular value decomposition is a linear-algebra method that decomposes a matrix into the product of three matrices, thereby obtaining the principal component information of the matrix. According to the singular value decomposition result, the segmentation region probability is calculated for all pixel points in the first image data, giving a target probability value for each pixel point, i.e. the probability that the pixel belongs to each region; this task may also be handled with machine learning methods, such as deep-learning-based convolutional neural networks. Finally, pixel-level image region segmentation is performed on the first image data according to the target probability value of each pixel point, dividing the pixels into different image areas and generating a plurality of second image data. In this way, the detail and edge information in the head region can be identified more accurately, improving the robustness and effect of the model.
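The sketch below shows one way the singular value decomposition result could be turned into per-pixel region probabilities, assuming the leading rank-1 components are treated as candidate region layers and a softmax over them yields the target probability values; this is an illustrative reading of the step, not the patent's exact algorithm.

```python
import numpy as np

def svd_region_probabilities(gray, k=3):
    """Decompose the first image data (as a 2-D matrix) with SVD and turn the
    first k rank-1 components into per-pixel region membership probabilities."""
    U, s, Vt = np.linalg.svd(gray.astype(float), full_matrices=False)
    # Each rank-1 component s[i] * U[:, i] * Vt[i, :] acts as one region layer.
    layers = np.stack([s[i] * np.outer(U[:, i], Vt[i]) for i in range(k)])  # (k, H, W)
    # Softmax across layers -> target probability of every pixel for every region.
    logits = layers - layers.max(axis=0, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return probs                                             # (k, H, W)

def split_regions(gray, k=3):
    """Assign each pixel to its most probable region, giving k second images."""
    probs = svd_region_probabilities(gray, k)
    labels = probs.argmax(axis=0)
    return [np.where(labels == i, gray, 0) for i in range(k)]
```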
In a specific embodiment, as shown in fig. 4, the process of executing step S104 may specifically include the following steps:
s401, respectively carrying out edge detection on a plurality of second image data to obtain the contour line of each hair;
s402, calculating the average distance between the contour line and the scalp surface and the minimum distance between the contour line and the scalp surface according to the contour line of each hair;
s403, separating wigs from real hair of the target user according to the average distance and the minimum distance to obtain target real hair;
s404, performing root parameter analysis on target real hair to obtain a root index set, wherein the root index set comprises: hair follicle number index and number of blocked hair follicles index.
Specifically, the server performs edge detection on the plurality of second image data to obtain the contour line of each hair. Edge detection is an image processing method that extracts features such as the contour line of a target object from an image. For the contour of each hair, the average distance of the contour from the scalp surface and the minimum distance between the contour and the scalp surface are calculated; the hairs can be classified according to this distance information, distinguishing the wig from the real hair, and indices such as the average distance and the minimum distance reflect the relationship between the hair and the scalp. The wig is then separated from the real hair of the target user according to the average distance and the minimum distance to obtain the target real hair. Finally, root parameter analysis is performed on the target real hair to obtain the hair root index set, which comprises a hair follicle number index and a blocked hair follicle number index.
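A minimal OpenCV sketch of this separation step, assuming the scalp surface is available as a binary (0/255) mask and that fixed distance thresholds decide which contours count as real hair; the threshold values and names are illustrative assumptions.

```python
import cv2
import numpy as np

def separate_real_hair(second_image, scalp_mask, max_avg_dist=15.0, max_min_dist=5.0):
    """Edge-detect hair strands and keep those whose contour stays close to the
    scalp surface (treated as real hair); the rest is attributed to the wig.

    second_image : 8-bit grayscale image of one segmented region
    scalp_mask   : 8-bit binary mask, 255 inside the scalp region, 0 elsewhere
    """
    edges = cv2.Canny(second_image, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    # Distance of every pixel to the scalp region (0 inside the scalp mask).
    dist_to_scalp = cv2.distanceTransform(255 - scalp_mask, cv2.DIST_L2, 3)

    real_hair = []
    for c in contours:
        pts = c.reshape(-1, 2)                       # (x, y) points on the contour
        d = dist_to_scalp[pts[:, 1], pts[:, 0]]
        if d.mean() <= max_avg_dist and d.min() <= max_min_dist:
            real_hair.append(c)                      # close to scalp -> real hair
    return real_hair
```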
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics, and obtaining a forward hidden state vector and a backward hidden state vector;
(2) Vector stitching is carried out on the forward hidden state vector and the backward hidden state vector to obtain a state characteristic vector;
(3) Inputting the hair root index set into a three-layer convolution network in the wig wearing management model, and carrying out parameter characteristic analysis on the hair root index set through the three-layer convolution network to obtain a plurality of parameter characteristic values;
(4) And carrying out vector conversion on the plurality of parameter characteristic values to generate a parameter characteristic vector.
Specifically, the scalp index set is input into the bidirectional threshold circulation network in the preset wig wearing management model to extract state characteristics, obtaining a forward hidden state vector and a backward hidden state vector. Vector stitching is performed on the forward hidden state vector and the backward hidden state vector to obtain the state feature vector, which contains the key information of the scalp condition. The hair root index set is input into the three-layer convolution network in the wig wearing management model, and parameter characteristic analysis is performed on the hair root index set through the three-layer convolution network to obtain a plurality of parameter characteristic values. Vector conversion is then performed on the plurality of parameter characteristic values to generate the parameter feature vector, which contains the key features of the hair root information.
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Vector fusion is carried out on the state feature vector and the parameter feature vector to obtain a target feature vector;
(2) Matching weight and deviation parameters corresponding to a regression prediction network in the wig wearing management model according to the wearing frequency and the wearing time length;
(3) And inputting the target feature vector into a regression prediction network to analyze the wearing state of the wig based on the weight and the deviation parameter, and obtaining the analysis result of the wearing state of the wig.
Specifically, vector fusion is performed on the state feature vector and the parameter feature vector, and the target feature vector is obtained. The state feature vector and the parameter feature vector are fused to obtain a target feature vector, and the target feature vector integrates various information such as scalp conditions, hair root features and the like. And matching the weight and deviation parameters corresponding to the regression prediction network in the wig wearing management model according to the wearing frequency and the wearing duration. And selecting a proper regression prediction network by matching the information such as the wearing frequency, the wearing duration and the like, and acquiring the corresponding weight and deviation parameters. And inputting the target feature vector into a regression prediction network to analyze the wearing state of the wig based on the weight and the deviation parameter, and obtaining the analysis result of the wearing state of the wig. And inputting the target feature vector into a regression prediction network for analysis to obtain information about the wearing state of the wig, such as wearing posture, wearing position, wearing condition and the like. The information can be used for realizing automatic management and optimization of wig wearing, and improving wearing effect and use experience.
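As a simple illustration of matching weight and deviation parameters according to wearing frequency and wearing duration, the sketch below assumes a bank of pre-trained parameter sets keyed by typical (frequency, duration) values and selects the nearest entry; the structure of this bank is an assumption made for illustration.

```python
def match_regression_parameters(frequency, duration, parameter_bank):
    """Pick the pre-trained weight/deviation parameters whose (frequency,
    duration) bucket is closest to the observed wearing behaviour.

    parameter_bank: list of dicts such as
        {"freq": 0.4, "dur": 6.0, "weights": ..., "bias": ...}
    """
    def distance(entry):
        return (entry["freq"] - frequency) ** 2 + (entry["dur"] - duration) ** 2

    best = min(parameter_bank, key=distance)
    return best["weights"], best["bias"]

# Usage: weights, bias = match_regression_parameters(0.5, 7.0, parameter_bank)
# The selected parameters then configure the regression prediction network.
```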
The method for managing wig wear in the embodiment of the present invention is described above, and the system for managing wig wear in the embodiment of the present invention is described below, referring to fig. 5, where an embodiment of the system for managing wig wear in the embodiment of the present invention includes:
the obtaining module 501 is configured to obtain wig wearing data of a target user based on a preset sensor, and perform wig wearing feature operation on the wig wearing data to obtain a wearing frequency and a wearing duration;
the positioning module 502 is configured to obtain initial image data of the target user, and perform image area positioning on the initial image data to obtain first image data;
a segmentation module 503, configured to perform scalp region state recognition on the first image data to obtain a scalp index set, and perform pixel-level image region segmentation on the first image data to generate a plurality of second image data;
the analysis module 504 is configured to perform root parameter analysis on the plurality of second image data, to obtain a root indicator set;
the processing module 505 is configured to input the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to perform state feature extraction to obtain a state feature vector, and input the hair root index set into a three-layer convolution network in the wig wearing management model to perform parameter feature analysis to obtain a parameter feature vector;
And the prediction module 506 is configured to perform vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and input the target feature vector, the wearing frequency and the wearing duration into a regression prediction network in the wig wearing management model to perform wig wearing state analysis, so as to obtain a wig wearing state analysis result.
Through the cooperation of the above components, scalp region state identification is performed on the first image data to obtain a scalp index set, and pixel-level image region segmentation is performed on the first image data to generate a plurality of second image data; root parameter analysis is performed on the plurality of second image data respectively to obtain a hair root index set; the scalp index set is input into a preset wig wearing management model, the hair root index set is input into the wig wearing management model for processing, a target feature vector is generated, and the target feature vector, the wearing frequency and the wearing duration are input into a regression prediction network in the wig wearing management model for wig wearing state analysis, obtaining a wig wearing state analysis result.
The wig wear monitoring management system in the embodiment of the present invention is described in detail from the point of view of the modularized functional entity in fig. 5 above, and the wig wear monitoring management device in the embodiment of the present invention is described in detail from the point of view of hardware processing below.
Fig. 6 is a schematic structural diagram of a wig wearing monitoring management device according to an embodiment of the present invention. The wig wearing monitoring management device 600 may vary considerably in configuration or performance and may include one or more processors (central processing units, CPU) 610, a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations on the wig wearing monitoring management device 600. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations in the storage medium 630 on the wig wearing monitoring management device 600.
The wig wear monitoring management device 600 may also include one or more power sources 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the configuration shown in fig. 6 does not limit the wig wear monitoring management device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The present invention also provides a wig wearing monitoring management device, including a memory and a processor, where the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the wig wearing monitoring management method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and may also be a volatile computer readable storage medium, in which instructions are stored which, when executed on a computer, cause the computer to perform the steps of the wig wear monitoring management method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A wig wear monitoring management method, characterized in that the wig wear monitoring management method comprises:
acquiring wig wearing data of a target user based on a preset sensor, and performing wig wearing feature operation on the wig wearing data to obtain wearing frequency and wearing duration;
acquiring initial image data of the target user, and performing image area positioning on the initial image data to obtain first image data;
performing scalp area state identification on the first image data to obtain a scalp index set, and performing pixel-level image area segmentation on the first image data to generate a plurality of second image data;
respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set;
inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics to obtain a state characteristic vector, and inputting the hair root index set into a three-layer convolution network in the wig wearing management model to perform parameter characteristic analysis to obtain a parameter characteristic vector;
and carrying out vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency and the wearing time length into a regression prediction network in the wig wearing management model to carry out wig wearing state analysis to obtain a wig wearing state analysis result.
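For orientation only, the following Python sketch shows how the six steps recited in claim 1 could be chained together. Every callable passed in (compute_wear_features, locate_head_region, and so on) is a hypothetical placeholder for one claimed step, and the NumPy-based hand-offs are assumptions rather than the patent's actual implementation.

```python
import numpy as np

def analyze_wig_wearing_state(sensor_records, initial_image,
                              compute_wear_features, locate_head_region,
                              recognize_scalp_state, segment_pixelwise,
                              analyze_roots, bigru_encode, cnn_encode,
                              regression_predict):
    """Illustrative data flow for the method of claim 1.

    Each callable argument stands in for one claimed step and is assumed to
    be provided elsewhere; only the order of the steps and the shape of the
    hand-offs between them are shown here.
    """
    # Step 1: sensor data -> wearing frequency and wearing duration.
    wearing_frequency, wearing_duration = compute_wear_features(sensor_records)

    # Step 2: initial image -> first image data (head region only).
    first_image = locate_head_region(initial_image)

    # Step 3: scalp-state indicators and pixel-level sub-regions.
    scalp_indicators = recognize_scalp_state(first_image)      # 1-D index set
    second_images = segment_pixelwise(first_image)             # list of sub-images

    # Step 4: hair-root indicators gathered from each segmented region.
    root_indicators = np.concatenate([analyze_roots(img) for img in second_images])

    # Step 5: two feature branches of the wig wearing management model.
    state_vector = bigru_encode(scalp_indicators)    # bidirectional recurrent branch
    param_vector = cnn_encode(root_indicators)       # three-layer convolutional branch

    # Step 6: fuse the branches and regress to the wearing-state result.
    target_vector = np.concatenate([state_vector, param_vector])
    return regression_predict(target_vector, wearing_frequency, wearing_duration)
```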
2. The wig wearing monitoring management method according to claim 1, wherein the obtaining the wig wearing data of the target user based on the preset sensor, and performing the wig wearing feature operation on the wig wearing data to obtain the wearing frequency and the wearing duration, includes:
acquiring wig wearing data of a target user based on a preset sensor, and performing statistical duration analysis on the wig wearing data to obtain wearing duration;
scanning the head area of the target user through the sensor, and constructing a head closed curved surface of the target user;
inputting the wearing time length and the head closed curved surface into a preset wearing frequency analysis function to calculate the wearing frequency, so as to obtain the wearing frequency;
the wearing frequency analysis function is as follows:
wherein P represents the wearing frequency, T represents the wearing duration, S represents the head closed curved surface, the wig wearing state at the space point (x, y, z) at time t takes the value 0 or 1, where 0 indicates that the wig is not worn and 1 indicates that the wig is worn, and t0 and t1 represent the start time and the end time of the time period, respectively.
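The claim defines the symbols of the wearing frequency analysis function, but the formula itself is not reproduced in this text. The Python sketch below therefore assumes one plausible reading: the time-normalized space-time integral, over [t0, t1] and over a mesh approximation of the head closed surface S, of the 0/1 wearing-state indicator. The mesh, the sampling step, and all names are illustrative assumptions, not the patent's formula.

```python
import numpy as np

def wearing_frequency(wear_state, surface_areas, t0, t1, dt):
    """Discretized sketch of a wearing-frequency analysis function.

    Assumptions (not taken from the claim, whose formula is not reproduced
    here): the head closed surface S is approximated by a mesh whose per-facet
    areas are `surface_areas` (shape [M]); `wear_state[k, m]` is the 0/1 wig
    wearing state of facet m at sample time t0 + k*dt; the function is the
    time-normalized space-time integral of that indicator.
    """
    times = np.arange(t0, t1, dt)                  # sample times in [t0, t1)
    T = t1 - t0                                    # wearing duration of the period
    # Surface integral at each time step: total facet area where the wig is worn.
    worn_area_per_step = wear_state[: len(times)] @ surface_areas
    # Time integral (rectangle rule), normalized by the duration T.
    return float(np.sum(worn_area_per_step) * dt / T)

# Toy example: a 4-facet "surface" observed once per second for 10 seconds.
areas = np.array([1.0, 1.0, 2.0, 2.0])
state = np.random.default_rng(0).integers(0, 2, size=(10, 4))
print(wearing_frequency(state, areas, t0=0.0, t1=10.0, dt=1.0))
```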
3. The wig wearing monitoring management method according to claim 2, wherein the obtaining the initial image data of the target user and performing image area location on the initial image data to obtain the first image data comprises:
acquiring initial image data of the target user, and performing polar coordinate conversion on the initial image data to obtain polar coordinate image data;
according to the polar coordinate image data, calculating a minimum angle and a maximum angle corresponding to the head area of the target user to obtain a target angle range;
calculating the minimum radius and the maximum radius corresponding to the head area of the target user according to the polar coordinate image data, and generating a target radial range;
traversing all first characteristic points in the polar coordinate image data, and screening the first characteristic points based on the target angle range and the target radial range to obtain second characteristic points;
and extracting characteristic images of the polar coordinate image data according to the second characteristic points to obtain first image data.
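A minimal Python sketch of the polar-coordinate localization of claim 3, assuming the pole is placed at the image centre, that a head mask and candidate feature points are available from earlier processing, and that "extracting the feature image" is approximated by cropping to the screened points; none of these choices are stated in the claim.

```python
import numpy as np

def locate_head_region(image, head_mask, feature_points):
    """Sketch of polar-coordinate head localization.

    Assumptions: `image` is an H x W array, `head_mask` is a boolean H x W
    array marking the head area (how it is obtained is outside this sketch),
    `feature_points` is an [N, 2] array of (row, col) candidate points, and
    the pole of the polar system is the image centre.
    """
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    # Polar conversion of the head-area pixels: radius and angle per pixel.
    ys, xs = np.nonzero(head_mask)
    radii = np.hypot(ys - cy, xs - cx)
    angles = np.arctan2(ys - cy, xs - cx)

    # Target angle range and target radial range of the head area.
    angle_range = (angles.min(), angles.max())
    radial_range = (radii.min(), radii.max())

    # Screen the first feature points by both ranges to get second feature points.
    pr = np.hypot(feature_points[:, 0] - cy, feature_points[:, 1] - cx)
    pa = np.arctan2(feature_points[:, 0] - cy, feature_points[:, 1] - cx)
    keep = ((pa >= angle_range[0]) & (pa <= angle_range[1]) &
            (pr >= radial_range[0]) & (pr <= radial_range[1]))
    second_points = feature_points[keep]

    # Extract the feature image: here, the bounding box of the kept points.
    r0, c0 = second_points.min(axis=0)
    r1, c1 = second_points.max(axis=0)
    first_image = image[int(r0):int(r1) + 1, int(c0):int(c1) + 1]
    return first_image, second_points
```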
4. The method of monitoring and managing wig wearing according to claim 1, wherein the performing scalp region state recognition on the first image data to obtain a scalp index set, and performing pixel-level image region segmentation on the first image data to generate a plurality of second image data, comprises:
performing scalp area state identification on the first image data to obtain a scalp index set, wherein the scalp index set comprises dandruff index data and a hair number index;
performing two-dimensional matrix conversion on the first image data to obtain a target two-dimensional matrix, and performing singular value decomposition on the target two-dimensional matrix to obtain a singular value decomposition result;
according to the singular value decomposition result, carrying out segmentation region probability calculation on all pixel points in the first image data to obtain a target probability value of each pixel point;
and carrying out pixel-level image region segmentation on the first image data according to the target probability value of each pixel point to generate a plurality of second image data.
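A sketch of the SVD-based pixel-level segmentation of claim 4. The claim does not specify how the "segmentation region probability" is derived from the decomposition, so the sketch assumes it is a low-rank reconstruction rescaled to [0, 1], and that regions are formed by slicing that probability into equal bands; both the rank and the number of bands are illustrative.

```python
import numpy as np

def svd_pixel_segmentation(first_image, rank=3, n_regions=4):
    """Sketch of SVD-driven pixel-level region segmentation.

    Assumptions: `first_image` is a 2-D grayscale array; the per-pixel
    segmentation probability is taken to be its value in a rank-`rank`
    reconstruction rescaled to [0, 1]; regions are equal probability bands.
    """
    matrix = first_image.astype(np.float64)                    # target 2-D matrix
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)      # SVD result

    # Low-rank reconstruction from the leading singular triplets.
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    # Target probability value of each pixel, rescaled to [0, 1].
    prob = (approx - approx.min()) / (approx.max() - approx.min() + 1e-12)

    # Pixel-level region split: one masked copy of the image per probability band.
    edges = np.linspace(0.0, 1.0, n_regions + 1)
    second_images = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        upper = prob <= hi if i == n_regions - 1 else prob < hi
        mask = (prob >= lo) & upper
        second_images.append(np.where(mask, matrix, 0.0))
    return prob, second_images
```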
5. The wig wearing monitoring management method according to claim 1, wherein the performing root parameter analysis on the plurality of second image data to obtain a set of root indicators comprises:
respectively carrying out edge detection on the plurality of second image data to obtain the contour line of each hair;
calculating an average distance of the contour line from the scalp surface and a minimum distance between the contour line and the scalp surface according to the contour line of each hair;
according to the average distance and the minimum distance, separating the wig from the real hair of the target user to obtain target real hair;
performing root parameter analysis on the target real hair to obtain a root index set, wherein the root index set comprises a hair follicle number index and a blocked hair follicle number index.
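A sketch of the hair-root analysis of claim 5, assuming a per-pixel scalp distance map is available, that wig fibres are hairs whose contours stay far from the scalp under the average- and minimum-distance test, and that the follicle and blocked-follicle counts are derived from the retained hairs. The gradient-based edge detector, the thresholds, and the bottom-rows root heuristic are all illustrative assumptions.

```python
import numpy as np

def separate_real_hair(second_images, scalp_height_map, min_gap=2.0, avg_gap=5.0):
    """Keep hairs whose contour lies close to the scalp surface.

    Assumptions: each entry of `second_images` roughly contains one hair;
    `scalp_height_map` gives, per pixel, the distance to the scalp surface
    (its construction is outside this sketch); a hair whose contour never
    comes close to the scalp (min distance > `min_gap`) or floats far from
    it on average (mean distance > `avg_gap`) is treated as wig fibre.
    """
    real_hairs = []
    for img in second_images:
        # Simple gradient-magnitude edge detection for the hair contour.
        grad = np.hypot(*np.gradient(img.astype(np.float64)))
        contour = grad > grad.mean() + grad.std()
        if not contour.any():
            continue
        d = scalp_height_map[contour]             # distances of contour pixels
        if d.min() <= min_gap and d.mean() <= avg_gap:
            real_hairs.append(img)                # target real hair
    return real_hairs

def root_indicators(real_hairs, blocked_threshold=0.3):
    """Toy root index set: follicle count and blocked-follicle count.

    A follicle is assumed to correspond to one retained real hair; it is
    counted as blocked when the bottom rows of the crop (assumed root end)
    are darker than `blocked_threshold` of the image maximum.
    """
    follicles = len(real_hairs)
    blocked = sum(1 for img in real_hairs
                  if img[-5:, :].mean() < blocked_threshold * (img.max() + 1e-12))
    return np.array([follicles, blocked], dtype=np.float64)
```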
6. The method according to claim 1, wherein the step of inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to perform state feature extraction to obtain a state feature vector, and inputting the root index set into a three-layer convolution network in the wig wearing management model to perform parameter feature analysis to obtain a parameter feature vector comprises:
inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics, and obtaining a forward hidden state vector and a backward hidden state vector;
vector stitching is carried out on the forward hidden state vector and the backward hidden state vector to obtain a state characteristic vector;
inputting the hair root index set into a three-layer convolution network in the wig wearing management model, and carrying out parameter characteristic analysis on the hair root index set through the three-layer convolution network to obtain a plurality of parameter characteristic values;
and carrying out vector conversion on the plurality of parameter characteristic values to generate a parameter characteristic vector.
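A PyTorch sketch of the two feature branches of claim 6, in which the "bidirectional threshold circulation network" is modelled as a bidirectional GRU and the "three-layer convolution network" as three stacked 1-D convolutions; the layer sizes and the mean-pooling "vector conversion" are assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

class FeatureBranches(nn.Module):
    """Sketch of the state-feature and parameter-feature branches."""

    def __init__(self, scalp_dim=1, root_dim=1, hidden=16):
        super().__init__()
        # Bidirectional recurrent branch for the scalp index set.
        self.bigru = nn.GRU(input_size=scalp_dim, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        # Three-layer convolutional branch for the hair root index set.
        self.cnn = nn.Sequential(
            nn.Conv1d(root_dim, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, scalp_indices, root_indices):
        # scalp_indices: [batch, seq_len, scalp_dim]; root_indices: [batch, root_dim, length]
        _, h_n = self.bigru(scalp_indices)                  # h_n: [2, batch, hidden]
        forward_h, backward_h = h_n[0], h_n[1]
        state_vector = torch.cat([forward_h, backward_h], dim=-1)   # vector stitching

        feat = self.cnn(root_indices)                       # parameter feature values
        param_vector = feat.mean(dim=-1)                    # vector conversion
        return state_vector, param_vector

# Toy usage: two indicator sequences of illustrative length.
model = FeatureBranches()
scalp = torch.randn(1, 6, 1)
roots = torch.randn(1, 1, 4)
state_vec, param_vec = model(scalp, roots)
print(state_vec.shape, param_vec.shape)   # torch.Size([1, 32]) torch.Size([1, 16])
```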
7. The method of claim 1, wherein the performing vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency, and the wearing duration into a regression prediction network in the wig wearing management model to perform wig wearing state analysis, to obtain a wig wearing state analysis result, includes:
vector fusion is carried out on the state characteristic vector and the parameter characteristic vector to obtain a target characteristic vector;
matching weight and deviation parameters corresponding to a regression prediction network in the wig wearing management model according to the wearing frequency and the wearing duration;
and inputting the target feature vector into the regression prediction network to analyze the wearing state of the wig based on the weight and the deviation parameter, and obtaining the analysis result of the wearing state of the wig.
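A sketch of the fusion and regression step of claim 7, assuming that "matching weight and deviation parameters according to the wearing frequency and wearing duration" means selecting a pre-trained (weights, bias) pair from a small bank keyed by coarse frequency and duration bands, and that the regression prediction network reduces to a single linear score; the band edges, units, and output labels are illustrative.

```python
import numpy as np

def wig_state_regression(target_vector, wearing_frequency, wearing_duration,
                         parameter_bank):
    """Frequency/duration-conditioned regression sketch.

    Assumptions: `parameter_bank` maps a coarse (frequency-band, duration-band)
    key to a (weights, bias) pair learned elsewhere; the regression network is
    reduced to one linear layer; thresholds and units are illustrative.
    """
    freq_band = "high" if wearing_frequency > 0.5 else "low"
    dur_band = "long" if wearing_duration > 8.0 else "short"     # hours, assumed unit
    weights, bias = parameter_bank[(freq_band, dur_band)]        # matched parameters

    score = float(weights @ target_vector + bias)                # regression output
    return {"score": score,
            "state": "abnormal wear" if score > 0.5 else "normal wear"}

# Toy usage: fuse the two branch vectors and predict with a random parameter bank.
rng = np.random.default_rng(1)
state_vec, param_vec = rng.normal(size=32), rng.normal(size=16)
target = np.concatenate([state_vec, param_vec])                  # vector fusion
bank = {(f, d): (rng.normal(size=target.size) * 0.01, 0.1)
        for f in ("low", "high") for d in ("short", "long")}
print(wig_state_regression(target, wearing_frequency=0.7, wearing_duration=10.0,
                           parameter_bank=bank))
```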
8. A wig wear monitoring management system, characterized in that the wig wear monitoring management system comprises:
the acquisition module is used for acquiring wig wearing data of a target user based on a preset sensor, and performing wig wearing feature operation on the wig wearing data to obtain wearing frequency and wearing duration;
the positioning module is used for acquiring initial image data of the target user, and performing image area positioning on the initial image data to obtain first image data;
the segmentation module is used for carrying out scalp area state recognition on the first image data to obtain a scalp index set, and carrying out pixel-level image area segmentation on the first image data to generate a plurality of second image data;
the analysis module is used for respectively carrying out root parameter analysis on the plurality of second image data to obtain a root index set;
the processing module is used for inputting the scalp index set into a bidirectional threshold circulation network in a preset wig wearing management model to extract state characteristics to obtain a state characteristic vector, and inputting the hair root index set into a three-layer convolution network in the wig wearing management model to perform parameter characteristic analysis to obtain a parameter characteristic vector;
and the prediction module is used for carrying out vector fusion on the state feature vector and the parameter feature vector to obtain a target feature vector, and inputting the target feature vector, the wearing frequency and the wearing duration into a regression prediction network in the wig wearing management model to carry out wig wearing state analysis to obtain a wig wearing state analysis result.
9. A wig wearing monitoring management device, characterized in that the wig wearing monitoring management device comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the wig wear monitoring management device to perform the wig wear monitoring management method of any of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, wherein the instructions when executed by a processor implement the wig wear monitoring management method of any of claims 1-7.
CN202310919697.1A 2023-07-26 2023-07-26 Wig wearing monitoring management method and system Active CN116630350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310919697.1A CN116630350B (en) 2023-07-26 2023-07-26 Wig wearing monitoring management method and system


Publications (2)

Publication Number Publication Date
CN116630350A true CN116630350A (en) 2023-08-22
CN116630350B CN116630350B (en) 2023-10-03

Family

ID=87613907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310919697.1A Active CN116630350B (en) 2023-07-26 2023-07-26 Wig wearing monitoring management method and system

Country Status (1)

Country Link
CN (1) CN116630350B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200027244A1 (en) * 2018-07-23 2020-01-23 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and computer program product
CN112587132A (en) * 2020-12-25 2021-04-02 苏州生昇合利信息技术有限公司 Wearable monitoring system convenient to disassemble and assemble and use method thereof
CN112802031A (en) * 2021-01-06 2021-05-14 浙江工商大学 Real-time virtual hair trial method based on three-dimensional human head tracking
CN113723302A (en) * 2021-08-31 2021-11-30 上海东普信息科技有限公司 Helmet wearing state detection method, device, equipment and storage medium
CN114821737A (en) * 2022-05-13 2022-07-29 浙江工商大学 Moving end real-time wig try-on method based on three-dimensional face alignment
CN114973080A (en) * 2022-05-18 2022-08-30 深圳能源环保股份有限公司 Method, device, equipment and storage medium for detecting wearing of safety helmet
CN114998830A (en) * 2022-05-20 2022-09-02 济南信通达电气科技有限公司 Wearing detection method and system for safety helmet of transformer substation personnel

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG XIAOJIA et al.: "Hair microscopic image classification based on convolutional neural network", Laser Journal, vol. 40, no. 05, pages 66-72 *

Also Published As

Publication number Publication date
CN116630350B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN109190540B (en) Biopsy region prediction method, image recognition device, and storage medium
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
US9576359B2 (en) Context based algorithmic framework for identifying and classifying embedded images of follicle units
US9626462B2 (en) Detecting tooth wear using intra-oral 3D scans
CN112288706A (en) Automatic chromosome karyotype analysis and abnormality detection method
JP2021536057A (en) Lesion detection and positioning methods, devices, devices, and storage media for medical images
CN106547356B (en) Intelligent interaction method and device
CN109002846B (en) Image recognition method, device and storage medium
JP2017016593A (en) Image processing apparatus, image processing method, and program
CN113159227A (en) Acne image recognition method, system and device based on neural network
KR102356465B1 (en) Method and server for face registration and face analysis
CN111340937A (en) Brain tumor medical image three-dimensional reconstruction display interaction method and system
CN111768418A (en) Image segmentation method and device and training method of image segmentation model
CN111829661A (en) Forehead temperature measurement method and system based on face analysis
CN116229189B (en) Image processing method, device, equipment and storage medium based on fluorescence endoscope
CN111428552A (en) Black eye recognition method and device, computer equipment and storage medium
CN115880266B (en) Intestinal polyp detection system and method based on deep learning
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116386120A (en) Noninductive monitoring management system
CN116664585B (en) Scalp health condition detection method and related device based on deep learning
Zhang et al. TPMv2: An end-to-end tomato pose method based on 3D key points detection
CN116630350B (en) Wig wearing monitoring management method and system
CN111275754B (en) Face acne mark proportion calculation method based on deep learning
CN112613425A (en) Target identification method and system for small sample underwater image
CN117133014A (en) Live pig face key point detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant