CN116469150A - Face recognition model training method, system and medium for container cloud platform - Google Patents

Face recognition model training method, system and medium for container cloud platform

Info

Publication number
CN116469150A
CN116469150A
Authority
CN
China
Prior art keywords
feature extraction
neural network
network
face recognition
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310513052.8A
Other languages
Chinese (zh)
Inventor
李参宏
韩平军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Netmarch Technologies Co ltd
Original Assignee
Jiangsu Netmarch Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Netmarch Technologies Co ltd filed Critical Jiangsu Netmarch Technologies Co ltd
Priority to CN202310513052.8A priority Critical patent/CN116469150A/en
Publication of CN116469150A publication Critical patent/CN116469150A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a face recognition model training method, system and medium for a container cloud platform. The method comprises the following steps: inputting the face image into an ODLOF feature extraction network for feature extraction, and inputting the extracted features into a first convolutional neural network for deep convolutional feature extraction; inputting the face image into an ODHBOS feature extraction network for feature extraction, and inputting the extracted features into a second convolutional neural network for deep convolutional feature extraction; inputting the face image into an ODMCD feature extraction network for feature extraction, and inputting the extracted features into a third convolutional neural network for deep convolutional feature extraction; and inputting the output features of the first, second and third convolutional neural networks into a feature fusion neural network for feature fusion, then classifying with a classifier. By adopting a multi-feature extraction approach, the invention improves the face recognition performance of the model and thereby the classification precision of the classifier.

Description

Face recognition model training method, system and medium for container cloud platform
Technical Field
The invention relates to the technical field of big data, and in particular to a face recognition model training method, system and medium for a container cloud platform.
Background
In recent years, a great deal of excellent open-source software has emerged around container virtualization technology, and container technologies represented by Kubernetes and Docker are widely applied in the DevOps and microservice fields. Beyond these two fields, however, enterprises also have requirements such as big-data analysis, databases and machine learning. In the past, limitations of the underlying technology required independent physical cluster deployments, so multiple businesses could not be fused efficiently, business silos and data silos formed easily, and cluster utilization suffered under changing business load. For these reasons, integrating mainstream enterprise services on a container cloud platform, thereby creating a more general, secure and efficient enterprise container cloud platform, has become the technical direction of most enterprises. A face recognition system needs large computing power and storage space, so the face to be recognized is generally transmitted to the container cloud platform for recognition over a network; however, because of network coverage, congestion or delay, such a system struggles to meet the requirements of practical application, and user experience is poor.
The edge-cloud collaboration approach can well address the poor real-time performance of face recognition systems: the cloud side can operate independently when the network is good, the edge side can operate independently without being constrained by the network state, and their cooperation keeps the face recognition system running normally under any network condition. However, the computing resources of edge devices are limited, and performing a complex face recognition inference process on edge terminal devices may increase delay. The storage resources of edge devices are also limited; storing face images directly in the edge device database would occupy too much memory and may exceed the device's storage capacity. Cloud computing has strong processing capability and large storage space but poor real-time performance, so a face recognition system relying on cloud computing alone is constrained by the network state; edge computing has good real-time performance, but the computing and storage resources of edge devices are limited.
Therefore, there is a need to propose a method, a system and a medium for training a face recognition model for a container cloud platform to overcome the above problems.
Disclosure of Invention
In view of the above technical problems, the invention provides a face recognition model training method, system and medium for a container cloud platform.
In a first embodiment of the present invention, a method for training a face recognition model for a container cloud platform is provided, the method comprising the following steps:
inputting the face image into an ODLOF feature extraction network for feature extraction, and inputting the extracted features into a first convolutional neural network for deep convolutional feature extraction;
inputting the face image into an ODHBOS feature extraction network for feature extraction, and inputting the extracted features into a second convolutional neural network for deep convolutional feature extraction;
inputting the face image into an ODMCD feature extraction network for feature extraction, and inputting the extracted features into a third convolutional neural network for deep convolutional feature extraction;
and inputting the output features of the first convolutional neural network, the second convolutional neural network and the third convolutional neural network into the feature fusion neural network for feature fusion, and classifying by using a classifier.
Optionally, the classifier is a Softmax classifier.
Optionally, the method further comprises:
and collecting a face image training set, wherein the face image training set is a data set with a label.
Optionally, the method for extracting the characteristics by the ODLOF characteristic extraction network includes:
the ODLOF feature extraction network defines the ratio of the local average density of the other samples around a given sample in the input face image to the sample's own local density as its Lof value, and judges each sample by its Lof value; if Lof is far greater than 1, the sample is an abnormal sample; if Lof is close to 1, the sample is a normal sample.
Optionally, the method for extracting the characteristics by the ODHBOS characteristic extraction network includes:
generating a corresponding histogram for each dimension of the data in the dataset, wherein the height of each dimension's histogram represents the density of the corresponding data; performing normalization to keep the weight of each feature consistent; and judging whether the data is abnormal by calculating the HBOS value of the sample.
Alternatively, for sample X, its HBOS value is formulated as follows:

HBOS(X) = Σ_{i=1}^{d} log( 1 / p_i(X) )

where p_i(X) is the probability density of the i-th feature of sample X and d is the number of features.
Optionally, the method for extracting the features by the ODMCD feature extraction network includes: and acquiring covariance matrix estimation values through an iteration method, calculating the mahalanobis distance between each sample and a final mean matrix, and judging whether the sample is abnormal or not through the mahalanobis distance.
A second embodiment of the invention provides a face recognition model training system for a container cloud platform. The system comprises: an input module; an ODLOF feature extraction network, an ODHBOS feature extraction network and an ODMCD feature extraction network; a first, a second and a third convolutional neural network, connected respectively to the ODLOF, ODHBOS and ODMCD feature extraction networks and used for deep convolutional feature extraction; and a feature fusion neural network, which performs feature fusion on the output features of the first, second and third convolutional neural networks and classifies them by using a classifier.
Optionally, the classifier is a Softmax classifier.
A third embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the container cloud platform-oriented face recognition model training method described in any one of the above.
In the technical scheme provided by the invention, firstly, a face image is input into an ODLOF feature extraction network to extract features, and then the extracted features are input into a first convolutional neural network to extract deep convolutional features; then, inputting the face image into an ODHBOS feature extraction network to extract features, and inputting the extracted features into a second convolutional neural network to extract deep convolutional features; then, inputting the face image into an ODMCD feature extraction network for feature extraction, and then inputting the extracted features into a third convolutional neural network for deep convolutional feature extraction; and then, inputting the output characteristics of the first, second and third convolutional neural networks into a characteristic fusion neural network, and classifying by using a Softmax classifier after characteristic fusion. Compared with the prior art, in the method provided by the invention, the performance of the model for face recognition is improved by adopting a multi-feature extraction mode, so that the classification precision of the classifier is improved. Meanwhile, the algorithm framework provided by the invention has higher robustness and generalization capability.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a face recognition model training method for a container cloud platform.
Fig. 2 is a schematic structural diagram of a face recognition model training system facing a container cloud platform.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve its intended purpose, the following detailed description covers the specific implementation, structure, characteristics and effects of the container cloud platform-oriented face recognition model training method according to the present invention, with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes the scheme of the container cloud platform-oriented face recognition model training method provided by the invention with reference to the accompanying drawings.
The invention provides a face recognition model training method oriented to a container cloud platform, referring to fig. 1, the method comprises the following steps:
step S10, inputting the face image into an ODLOF feature extraction network for feature extraction, and inputting the extracted features into a first convolutional neural network for deep convolutional feature extraction.
In one embodiment of the present invention, the method further comprises:
and collecting a face image training set, wherein the face image training set is a data set with a label.
Specifically, the networks are trained using the labeled dataset as training samples.
Specifically, the method for extracting the characteristics by the ODLOF characteristic extraction network comprises the following steps:
the ODLOF feature extraction network defines a ratio of local average density of other samples around a certain sample in the input face image to local density of itself as a Lof value. Whether it is abnormal or not is judged by the size of Lof value of each sample. Wherein if the Lof value is much greater than 1, then the sample is considered highly likely to be anomalous; conversely, if the Lof value is close to 1, the sample may be normal and the closer to 1 the higher the likelihood.
Assume there is a point O whose nearest neighbor is a point P, with d(O, P) the distance between the two points. The K-th distance of point O, denoted d_K(O), is the distance from O to its K-th nearest neighbor; the set of points whose distance from O is less than or equal to d_K(O) constitutes the K-distance neighborhood N_K(O). The reachability distance between O and P is defined as reach-dist_K(O, P) = max{ d_K(P), d(O, P) }; that is, if d(O, P) ≤ d_K(P) then reach-dist_K(O, P) = d_K(P), otherwise reach-dist_K(O, P) = d(O, P). Let lrd_K(O) be the local reachable density of point O, the reciprocal of the average reachability distance between O and all points in its K-distance neighborhood:

lrd_K(O) = 1 / ( (1 / |N_K(O)|) · Σ_{P ∈ N_K(O)} reach-dist_K(O, P) )

The local outlier factor LOF_K(O) represents the ratio of the local average reachable density of the other points in N_K(O) to the local reachable density of point O, as shown in the following formula:

LOF_K(O) = ( (1 / |N_K(O)|) · Σ_{P ∈ N_K(O)} lrd_K(P) ) / lrd_K(O)
Here, the local anomaly factor matrix F_LOF is defined by the following formula, where x_j denotes any one sample among the n samples:

F_LOF = [ LOF_K(x_1), LOF_K(x_2), …, LOF_K(x_n) ]^T

By extracting this matrix, the new feature F_LOF is obtained.
And S20, inputting the face image into an ODHBOS feature extraction network for feature extraction, and inputting the extracted features into a second convolutional neural network for deep convolutional feature extraction.
On the premise that the dimensions of the data are mutually independent, ODHBOS generates a corresponding histogram for each dimension, where the height of each dimension's histogram represents the density of the corresponding data, and normalization is performed to keep the weight of each feature consistent. Whether a sample is abnormal is judged by calculating its HBOS value; the higher the HBOS value, the more likely the sample is an abnormal point.
Assume the probability density of the i-th feature of a sample X is p_i(X); it is taken as the normalized height of the histogram bin into which the i-th feature value of X falls, as shown in the following formula:

p_i(X) = hist_i(X)

For sample X, its HBOS value is defined as follows:

HBOS(X) = Σ_{i=1}^{d} log( 1 / p_i(X) )

The HBOS values of all samples are spliced into a matrix, and the new feature F_HBOS is obtained by transposition, as given by the following formula:

F_HBOS = [ HBOS(x_1), HBOS(x_2), …, HBOS(x_n) ]^T
and step S30, inputting the face image into an ODMCD feature extraction network for feature extraction, and inputting the extracted features into a third convolutional neural network for deep convolutional feature extraction.
In one embodiment of the present invention, the feature extraction method of the ODMCD network includes:
ODMCD is a robust algorithm that performs estimation through location and scatter. First, a relatively stable covariance matrix estimate is obtained by an iterative method; then, for each sample, the Mahalanobis distance between the sample and the final mean matrix is calculated; finally, whether the sample is abnormal is judged by this Mahalanobis distance.
A specified number m of samples is randomly selected from the n samples, the mean and variance of all features of these samples are computed to obtain a mean matrix T and a covariance matrix S, and then the Mahalanobis distance from each sample to T is computed, as given by the following formula:

d(x_j) = sqrt( (x_j − T) S^{-1} (x_j − T)^T )

Next, the m samples with the smallest Mahalanobis distances are selected, a new mean matrix and covariance matrix are recomputed, and the procedure iterates until convergence, yielding the final mean matrix T* and covariance matrix S*. By computing the Mahalanobis distance from each sample to T*, the new feature F_MCD is extracted, defined by the following formula:

F_MCD = [ d*(x_1), d*(x_2), …, d*(x_n) ]^T, where d*(x_j) = sqrt( (x_j − T*) (S*)^{-1} (x_j − T*)^T )
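The iterative mean/covariance estimation and Mahalanobis distances described above can be sketched with so-called C-steps; the subset size (0.75·n by default), the iteration cap and the convergence test are assumptions of this illustration:

```python
import numpy as np

def mcd_mahalanobis(X, m=None, max_iter=20, seed=0):
    """Mahalanobis distance of every sample to a robust (MCD-style) mean."""
    n, _ = X.shape
    m = m or int(0.75 * n)                      # size of the "clean" subset
    rng = np.random.default_rng(seed)
    subset = rng.choice(n, size=m, replace=False)
    for _ in range(max_iter):                   # C-steps until the subset is stable
        T = X[subset].mean(axis=0)              # mean matrix of the subset
        S = np.cov(X[subset], rowvar=False)     # covariance matrix of the subset
        diff = X - T
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)
        new_subset = np.argsort(d2)[:m]         # keep the m closest samples
        if set(new_subset) == set(subset):
            break
        subset = new_subset
    return np.sqrt(d2)
```

Because the mean and covariance are recomputed only from the closest subset, a distant outlier barely influences them and receives a very large final distance.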
in one embodiment of the present invention, a feature extraction method of a convolutional neural network includes:
the first, second and third convolutional neural networks adopt the same network structure and parameters: each has 3 convolutional layers with a convolution stride of 3, and the kernel sizes of the three layers are set to 3×3, 6×6 and 9×9 respectively.
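As a quick check of the stated layer configuration, the output spatial size of a valid (unpadded) convolution can be computed from the kernel size and stride; the input resolution of 128 below is a hypothetical value, since the description does not state one:

```python
def conv_out(size, kernel, stride):
    # Output spatial size of a "valid" (unpadded) convolution.
    return (size - kernel) // stride + 1

# Kernel sizes and stride named in the description: 3x3, 6x6, 9x9, stride 3.
size = 128                       # hypothetical input resolution (assumed)
sizes = []
for k in (3, 6, 9):
    size = conv_out(size, k, 3)
    sizes.append(size)
```

With a 128-pixel input this gives spatial sizes 42, 13 and 2 after the three layers, showing how quickly a stride of 3 shrinks the feature map.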
And S40, inputting the output characteristics of the first convolutional neural network, the second convolutional neural network and the third convolutional neural network into a characteristic fusion neural network for characteristic fusion, and classifying by using a classifier.
The classifier is a Softmax classifier.
The outputs of the first, second and third convolutional neural networks are input into a fully-connected neural network for feature fusion. The fully-connected network has 3 layers: the first layer has as many neurons as the input features have pixels, the second layer has 500 neurons, and the third layer has 200 neurons; the result is finally fed into a Softmax classifier for classification.
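A minimal sketch of the fusion network's forward pass follows; the fused input length (300), the ReLU activations, the random weight initialization and the 10-class output layer are assumptions made for the example, with only the 500- and 200-neuron hidden layers taken from the description:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fusion_forward(feats, layers):
    """feats: list of branch output vectors; layers: list of (W, b) pairs."""
    h = np.concatenate(feats)                  # feature fusion by concatenation
    for W, b in layers[:-1]:
        h = np.maximum(W @ h + b, 0.0)         # hidden layers (ReLU assumed)
    W, b = layers[-1]
    return softmax(W @ h + b)                  # Softmax class probabilities

# Hypothetical sizes: fused input -> 500 -> 200 -> 10 classes.
rng = np.random.default_rng(0)
dims = [300, 500, 200, 10]
layers = [(rng.normal(0.0, 0.01, (o, i)), np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
probs = fusion_forward([np.ones(100)] * 3, layers)
```

Concatenation is only one plausible reading of "feature fusion"; the patent does not specify the fusion operator.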
And after model training is completed, classifying face recognition tasks by using the trained models.
The container cloud platform-oriented face recognition scheme provided by the invention mainly combines cloud-computing-based and edge-computing-based face recognition. When the network condition is good, the face recognition terminal device transmits the acquired face image over the Internet to the container cloud platform, where face image preprocessing, feature extraction and other operations are completed; the recognition result is then returned to the terminal device for display. When the terminal device receives no response from the container cloud platform within a specified time (that is, the network is disconnected, failed, congested or delayed), face image preprocessing, feature extraction and the other operations are performed on the terminal device, and the result is displayed on the terminal.
The invention also provides a face recognition model training system for the container cloud platform. As shown in fig. 2, the system comprises: an input module; an ODLOF feature extraction network, an ODHBOS feature extraction network and an ODMCD feature extraction network; a first, a second and a third convolutional neural network, connected respectively to the ODLOF, ODHBOS and ODMCD feature extraction networks and used for deep convolutional feature extraction; and a feature fusion neural network, which performs feature fusion on the output features of the first, second and third convolutional neural networks and classifies them by using a classifier.
Specifically, the classifier is a Softmax classifier.
Embodiments of the present invention also provide a medium that is a computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, for example, to perform the method steps S10 through S40 in fig. 1 described above.
In particular, the computer-readable storage medium can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory components or memories of the operating environment described in embodiments of the present invention are intended to comprise one or more of these and/or any other suitable types of memory.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. The face recognition model training method for the container cloud platform is characterized by comprising the following steps of:
inputting the face image into an ODLOF feature extraction network for feature extraction, and inputting the extracted features into a first convolutional neural network for deep convolutional feature extraction;
inputting the face image into an ODHBOS feature extraction network for feature extraction, and inputting the extracted features into a second convolutional neural network for deep convolutional feature extraction;
inputting the face image into an ODMCD feature extraction network for feature extraction, and inputting the extracted features into a third convolutional neural network for deep convolutional feature extraction;
and inputting the output features of the first convolutional neural network, the second convolutional neural network and the third convolutional neural network into the feature fusion neural network for feature fusion, and classifying by using a classifier.
2. The container cloud platform-oriented face recognition model training method of claim 1, wherein the classifier is a Softmax classifier.
3. The container cloud platform-oriented face recognition model training method of claim 2, further comprising:
and collecting a face image training set, wherein the face image training set is a data set with a label.
4. The container cloud platform-oriented face recognition model training method of claim 3, wherein the ODLOF feature extraction network feature extraction method comprises:
the ODLOF feature extraction network defines the ratio of the local average density of the other samples around a given sample in the input face image to the sample's own local density as its Lof value, and judges each sample by its Lof value;
if Lof is far greater than 1, the sample is an abnormal sample;
if Lof is close to 1, the sample is a normal sample.
5. The container cloud platform-oriented face recognition model training method of claim 4, wherein the method for feature extraction by the ODHBOS feature extraction network comprises:
and generating corresponding histograms for each dimension of the data in the dataset, wherein the height of the corresponding histogram of each dimension represents the density of the corresponding data, carrying out normalization processing to ensure the weight consistency of each feature, and judging whether the sample is abnormal or not by calculating the HBOS value of the sample.
6. The container cloud platform oriented face recognition model training method of claim 5, wherein for sample X, its HBOS value formula is as follows:

HBOS(X) = Σ_{i=1}^{d} log( 1 / p_i(X) )

where p_i(X) is the probability density of the i-th feature of sample X and d is the number of features.
7. The container cloud platform-oriented face recognition model training method of claim 3, wherein the ODMCD feature extraction network feature extraction method comprises:
and acquiring covariance matrix estimation values through an iteration method, calculating the mahalanobis distance between each sample and a final mean matrix, and judging whether the sample is abnormal or not through the mahalanobis distance.
8. A face recognition model training system for a container cloud platform, characterized by comprising: an input module; an ODLOF feature extraction network, an ODHBOS feature extraction network and an ODMCD feature extraction network; a first, a second and a third convolutional neural network, connected respectively to the ODLOF, ODHBOS and ODMCD feature extraction networks and used for deep convolutional feature extraction; and a feature fusion neural network, into which the output features of the first, second and third convolutional neural networks are input for feature fusion, classification then being performed by using a classifier.
9. The container cloud platform oriented face recognition model training system of claim 8, wherein said classifier is a Softmax classifier.
10. A medium, characterized in that the medium is a computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the container cloud platform oriented face recognition model training method of any one of claims 1-7.
CN202310513052.8A 2023-05-09 2023-05-09 Face recognition model training method, system and medium for container cloud platform Pending CN116469150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310513052.8A CN116469150A (en) 2023-05-09 2023-05-09 Face recognition model training method, system and medium for container cloud platform

Publications (1)

Publication Number Publication Date
CN116469150A true CN116469150A (en) 2023-07-21

Family

ID=87175414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310513052.8A Pending CN116469150A (en) 2023-05-09 2023-05-09 Face recognition model training method, system and medium for container cloud platform

Country Status (1)

Country Link
CN (1) CN116469150A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination