WO2021164283A1 - Clothing color recognition method, device and system based on semantic segmentation - Google Patents

Clothing color recognition method, device and system based on semantic segmentation Download PDF

Info

Publication number
WO2021164283A1
Authority
WO
WIPO (PCT)
Prior art keywords
clothing
pictures
color recognition
portrait
clothing color
Prior art date
Application number
PCT/CN2020/121515
Other languages
English (en)
French (fr)
Inventor
王江鹏
毛晓蛟
赵文忠
章勇
曹李军
Original Assignee
苏州科达科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州科达科技股份有限公司 filed Critical 苏州科达科技股份有限公司
Publication of WO2021164283A1 publication Critical patent/WO2021164283A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • The present invention relates to the technical field of image processing, in particular to a clothing color recognition method, device and system based on semantic segmentation.
  • Clothing color is generally recognized by feeding the entire portrait into a classifier and directly classifying the colors of the upper and lower garments.
  • However, this method struggles when the portrait is incomplete due to shooting angle or occlusion, or when the pose of the portrait is complex.
  • The industry also performs multi-label classification based on graph neural networks.
  • The rationale is that the relationships between top color, bottom color and other human attributes can be exploited. However, clothing color has no obvious correlation with attributes such as clothing style or clothing length, so while recognition of other attributes improves to some degree, there is no positive effect on clothing color recognition.
  • The industry also has methods based on the attention mechanism.
  • Such a method uses attention to roughly locate the positions of the tops and bottoms and then judges color from these localized features, but the human body is not a rigid body of unchanging shape: changes in its own posture cause occlusion, distortion and similar phenomena.
  • Moreover, video surveillance cannot always capture a complete full-body shot of everyone. Facing these complex scenes, the attention mechanism suffers from inaccurate localization, which directly affects the final classification result and causes recognition errors.
  • An existing invention patent discloses a deep-learning-based fashion clothing image segmentation method that can identify the semantic information of upper-body and lower-body clothing from complex scenes: the source image is fed into a deep learning network designed specifically for the fashion clothing domain,
  • and the network is trained to automatically recognize upper-body clothing, lower-body clothing, and full-body outfits in the image.
  • Clothing keypoint information and keypoint visibility information are used to pool the global image features produced by the image feature extraction module around the keypoint positions to obtain local features that are independent of clothing deformation and occlusion, so the recognition and segmentation accuracy of clothing is greatly improved.
  • The clothing keypoint information includes coordinate point information for each kind of garment; for upper-body clothing, for example, there are coordinate points such as the left collar, right collar, left sleeve, right sleeve, left hem, and right hem.
  • The purpose of the present invention is to provide a clothing color recognition method, device and system based on semantic segmentation, which uses a large amount of annotated semantic segmentation data to segment the clothing contained in portrait pictures in detail and extract a partial picture of each clothing region contained therein.
  • The partial pictures of the clothing regions are processed to form new pictures that are sent to a classifier for color recognition.
  • On the same data set, the present invention achieves a higher recognition success rate and accuracy than other methods, and the improvement is especially large for complex cases such as bust-only shots and occlusion in the picture.
  • For example, in the case of a bust shot, the present invention achieves almost perfect judgment accuracy.
  • The present invention proposes a clothing color recognition method based on semantic segmentation.
  • The clothing color recognition method includes the following steps:
  • S1: Collect a certain quantity of portrait pictures with different parameters in different scenes, and annotate the collected portrait pictures.
  • The annotation content includes clothing region segmentation information annotation and human joint point information annotation; random parameter transformation is performed on the annotated portrait pictures to generate an initial sample set.
  • S2: Create a clothing region extraction model based on the JPPNet network, the model being used to extract the clothing regions in a portrait picture by combining the human joint point annotation with the clothing region segmentation annotation; import the portrait pictures of the initial sample set into the clothing region extraction model and extract from each portrait picture a partial picture of every clothing region it contains.
  • S3: Perform background transformation and color classification annotation on each extracted partial picture of a clothing region; perform size unification and random parameter transformation on the annotated partial pictures to generate a training sample set.
  • S4: Create a clothing color recognition model based on a classifier, and import the training sample set into the model to train it.
  • S5: Collect portrait pictures containing clothing information in real time, and use the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture.
  • the parameters of the portrait picture include shooting parameters and human body posture parameters
  • the shooting parameters include lighting conditions, shooting scenes, shooting angles, and shooting distances;
  • the human body posture parameters include human body posture, full-body close-up, and half-body close-up.
  • the clothing region segmentation information label includes the information labels of the head, upper garment, lower garment, limbs, and foot regions;
  • the human body joint point information labeling includes the information labeling of the human body's wrist, elbow, shoulder, head, chest cavity, knee joint, and ankle joint point.
  • the random parameter transformation processing refers to random cropping, rotation, flipping, and color transformation processing on the picture.
  • step S3 performing background transformation on the extracted partial pictures of each clothing region refers to unifying the background regions in the extracted partial pictures of each clothing region into a pure white background.
  • step S3 a bilinear interpolation method is used to unify the marked partial pictures to the same size.
  • step S5 using the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture includes the following steps:
  • S51 Collect portrait pictures containing clothing information in real time, and import the portrait pictures into the clothing region extraction model to extract partial pictures of each clothing region contained in the portrait pictures;
  • S52 Perform size unification and background transformation processing on the extracted partial pictures of each clothing area, and import the processed partial pictures of each clothing area into the clothing color recognition model to identify the corresponding clothing color.
  • the present invention proposes a clothing color recognition device based on semantic segmentation.
  • the clothing color recognition device includes:
  • the clothing region extraction model created based on the JPPNet network is used to extract the clothing region on the portrait image combined with the annotation of the human joint point information and the annotation of the clothing region segmentation information;
  • the clothing color recognition model created based on the classifier is used to recognize the clothing color in the imported partial images of each clothing area;
  • The portrait picture collection module is used to collect portrait pictures with different parameters in different scenes;
  • the sample set generation module is used to perform random parameter transformation processing on the imported pictures to generate corresponding training picture sample sets;
  • the image preprocessing module is used to perform background transformation and uniform size processing on the imported pictures.
  • the present invention proposes a clothing color recognition system based on semantic segmentation.
  • The clothing color recognition system includes a memory, a processor, and a computer program stored in the memory and runnable on the processor.
  • Using a large amount of annotated semantic segmentation data, the clothing contained in portrait pictures is segmented more accurately and finely so that a partial picture of each clothing region can be extracted. Because human joint point annotation is introduced, the extracted partial pictures are no longer limited by factors such as incompleteness or shooting direction; each partial picture is then processed into a new picture and sent to the classifier for color recognition, which narrows the color recognition scope and improves color recognition efficiency. Compared with other clothing color recognition methods, the present invention achieves a higher recognition success rate and accuracy on the same data set.
  • The original background region of each generated partial picture is transformed into a uniform background (for example, a pure white background) to avoid background interference.
  • Fig. 1 is a flowchart of a clothing color recognition method based on semantic segmentation of the present invention.
  • Fig. 2 is a diagram of the clothing color recognition steps of the present invention.
  • Fig. 3 is a schematic diagram of a specific identification scene of the present invention.
  • the present invention proposes a clothing color recognition method based on semantic segmentation.
  • the clothing color recognition method includes the following steps:
  • S1: Collect a certain quantity of portrait pictures with different parameters in different scenes, and annotate the collected portrait pictures.
  • The annotation content includes clothing region segmentation information annotation and human joint point information annotation; random parameter transformation is performed on the annotated portrait pictures to generate an initial sample set.
  • The clothing region extraction model is used to extract the clothing regions in a portrait picture by combining the human joint point annotation with the clothing region segmentation annotation.
  • The portrait pictures of the initial sample set are imported into the clothing region extraction model, and a partial picture of every clothing region contained in each portrait picture is extracted.
  • The JPPNet network is a TensorFlow-based deep learning method for human parsing and pose estimation commonly used in the prior art.
  • S3 Perform background transformation and color classification and labeling on the extracted partial pictures of each clothing region; perform size unification and random parameter transformation processing on the labeled partial pictures to generate a training sample set.
  • S4 Create a clothing color recognition model based on the classifier, and import the training sample set into the clothing color recognition model to train the clothing color recognition model.
  • S5 Collect portrait pictures containing clothing information in real time, and use a clothing region extraction model and a clothing color recognition model to recognize the clothing colors of one or more clothing regions in the portrait picture.
  • Step 1 Generate the initial sample set
  • The annotation includes two aspects: the first is segmentation annotation of regions such as the head, upper garment, lower garment, and limbs; the second is annotation of 15 human joint points, including the wrists, elbows, shoulders, head, chest, knees, and ankles.
  • The aforementioned annotation data is the data basis on which, in Step 2, the JPPNet network extracts from each portrait picture a partial picture of every clothing region it contains.
  • Through annotation, different garments such as coats, tops, and one-piece outfits can be distinguished. Since the number of collected portrait images is limited, while the clothing color recognition model produced by training becomes more robust and accurate as the training samples grow larger and more varied,
  • the present invention proposes performing random parameter transformation (for example random cropping, rotation, flipping, and color transformation of the pictures) on the annotated portrait pictures to generate an initial sample set.
  • Step 2 Extract a partial picture of each clothing area contained in the portrait picture
  • the type of clothing area that is finally extracted is determined by the user according to actual needs, for example, only partial pictures containing upper and lower clothes are extracted.
  • the clothing region extraction model adopts JPPNet, which assists the segmentation of different regions of the human body by using human body joint points. Thanks to this assistance, compared with common semantic segmentation models, the clothing region extraction model can greatly reduce the mis-segmentation, which can significantly enhance the generalization ability of the entire model.
  • Step 3 Generate a training sample set
  • the present invention proposes to replace the original background area with a pure white background to avoid the problem of background interference.
  • The present invention proposes performing random parameter transformation on the annotated partial pictures (for example random cropping, rotation, flipping, and color transformation) to enlarge the number of training samples as much as possible and generate the training sample set.
  • Step 4 Create and train a clothing color recognition model
  • the training sample set generated in step 3 is imported into the classifier for color recognition to complete the training of the clothing color recognition model.
  • The training sample set can be divided into a training set and a test set at a set ratio.
  • The training set is used to train the clothing color recognition model, and the test set is then used to verify it (for example, judging whether the recognition success rate and accuracy meet preset requirements); training is complete once verification passes, otherwise the model parameters are adjusted and the model retrained until verification passes.
  • In step S5, using the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture includes the following steps:
  • S51 Collect portrait pictures containing clothing information in real time, and import the portrait pictures into the clothing region extraction model to extract partial pictures of each clothing region included in the portrait pictures.
  • S52 Perform size unification and background transformation processing on the extracted partial pictures of each clothing area, and import the processed partial pictures of each clothing area into the clothing color recognition model to identify the corresponding clothing color.
  • The present invention derives the correspondence between human joint points and clothing regions from a large number of samples, and uses the human joint points to extract each clothing region precisely during actual clothing recognition. For example, in a bust photo lacking a complete picture of the bottoms, the incomplete bottoms picture can still be extracted from the whole portrait by combining the limbs, knees, hips and so on, after which background processing and color recognition are applied to the extracted picture.
  • The clothing color recognition method of the present invention can effectively handle the aforementioned complex situations: based on the human joint point information in the picture, it accurately extracts different clothing regions such as the top region and the bottom region, then extracts the top from the top region
  • and the bottom from the bottom region, and replaces the background color of the top and bottom pictures to make the top or bottom part stand out.
  • the present invention proposes an auxiliary segmentation method using human joint points to avoid this situation to a certain extent.
  • the present invention proposes a clothing color recognition device based on semantic segmentation.
  • the clothing color recognition device includes:
  • the clothing region extraction model created based on the JPPNet network is used to combine human body joint point information annotation and clothing region segmentation information annotation to extract clothing regions on portrait pictures.
  • the clothing color recognition model created based on the classifier is used to recognize the clothing color in the imported partial pictures of each clothing area.
  • the sample set generation module is used to perform random parameter transformation processing on the imported pictures to generate a corresponding training picture sample set.
  • the image preprocessing module is used to perform background transformation and uniform size processing on the imported pictures.
  • the present invention proposes a clothing color recognition system based on semantic segmentation.
  • The clothing color recognition system includes a memory, a processor, and a computer program stored in the memory and runnable on the processor.
  • the processor executes the computer program, the steps of the aforementioned clothing color recognition method are realized.
  • The foregoing program can be stored in a computer-readable storage medium.
  • When executed, the program performs the steps of the foregoing method embodiments; and the foregoing storage medium includes media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
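As an illustrative sketch only (not part of the patented implementation): the recognition flow of steps S51-S52 can be outlined in a few lines of Python, with the JPPNet region extraction replaced by a precomputed segmentation mask and the trained color classifier replaced by a nearest-named-color rule. The label values, the palette, and the `recognise_colours` helper are all assumptions introduced here for illustration.

```python
import numpy as np

# Stand-ins: label map and colour palette are illustrative assumptions.
PALETTE = {"red": (200, 30, 30), "blue": (30, 30, 200), "white": (240, 240, 240)}
LABELS = {"top": 1, "bottom": 2}

def recognise_colours(image, mask):
    """Per-region colour judgement from an image and its segmentation mask."""
    results = {}
    for name, label in LABELS.items():
        sel = mask == label
        if not sel.any():          # region missing, e.g. bottoms in a bust shot
            continue
        mean = image[sel].mean(axis=0)  # mean RGB of the garment pixels only
        results[name] = min(PALETTE,
                            key=lambda c: np.linalg.norm(np.array(PALETTE[c]) - mean))
    return results

image = np.zeros((80, 40, 3), np.uint8)
mask = np.zeros((80, 40), np.uint8)
mask[5:40, 5:35] = 1;  image[5:40, 5:35] = (195, 35, 25)   # a red top
mask[45:75, 8:32] = 2; image[45:75, 8:32] = (25, 40, 190)  # blue bottoms
print(recognise_colours(image, mask))  # -> {'top': 'red', 'bottom': 'blue'}
```

Because the loop simply skips absent labels, a half-body picture degrades gracefully to reporting only the regions that were actually segmented, mirroring the bust-shot behaviour described above.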

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A clothing color recognition method based on semantic segmentation, comprising: collecting a certain quantity of portrait pictures with different parameters in different scenes and annotating the collected portrait pictures; performing random parameter transformation on the annotated portrait pictures to generate an initial sample set (S1); creating a clothing region extraction model based on the JPPNet network and extracting from the portrait pictures a partial picture of each clothing region they contain (S2); performing background transformation and color classification annotation on the extracted partial pictures, then size unification and random parameter transformation on the annotated partial pictures to generate a training sample set (S3); creating a clothing color recognition model based on a classifier and importing the training sample set into the clothing color recognition model to train it (S4). The above scheme achieves a higher recognition success rate and recognition accuracy, and the improvement in both is especially large for complex cases such as bust-only shots and occlusion in the picture.

Description

Clothing color recognition method, device and system based on semantic segmentation
This application claims priority to the earlier Chinese patent application No. CN202010098415.2, filed with the China National Intellectual Property Administration on 2020-02-18, the content of which is fully incorporated into this application by reference.
Technical Field
The present invention relates to the technical field of image processing, and in particular to a clothing color recognition method, device and system based on semantic segmentation.
Background Art
Clothing color is generally recognized by feeding the entire portrait into a classifier and directly classifying the colors of the upper and lower garments, but this method struggles when the portrait is incomplete due to shooting angle or occlusion, or when the pose of the portrait is complex.
Besides direct classification, the industry also performs multi-label classification based on graph neural networks, the rationale being that the relationships between top color, bottom color and other human attributes can be exploited. However, clothing color has no obvious correlation with attributes such as clothing style or clothing length, so while recognition of other attributes improves to some degree, there is no positive effect on clothing color recognition.
There are also attention-based methods, which use attention to roughly locate the tops and bottoms and then judge color from those localized features. But the human body is not a rigid body of unchanging shape: changes in its own posture cause occlusion, distortion and similar phenomena, and video surveillance cannot always capture a complete full-body shot of everyone. Facing such complex scenes, the attention mechanism suffers from inaccurate localization, which directly degrades the final classification result and causes recognition errors.
Because human dress is complex, the position and extent of garments vary greatly with how the person's image was obtained, the manner of occlusion, changes in posture, changes in shooting angle, and so on. Previous clothing color recognition schemes either roughly locate the garment, rely on the positions of human keypoints, or judge directly from the whole image. For example, an existing invention patent discloses a deep-learning-based fashion clothing image segmentation method that can identify the semantic information of upper-body and lower-body clothing from complex scenes: the source image is fed into a deep learning network designed specifically for the fashion clothing domain and trained to automatically recognize upper-body clothing, lower-body clothing, and full-body outfits in the image. In the local clothing feature extraction module, clothing keypoint information and keypoint visibility information are used to pool the global image features produced by the image feature extraction module around the keypoint positions to obtain local features that are independent of clothing deformation and occlusion, greatly improving the recognition and segmentation accuracy of clothing. The clothing keypoint information includes coordinate point information for each kind of garment; for upper-body clothing, for example, there are coordinate points such as the left collar, right collar, left sleeve, right sleeve, left hem, and right hem.
In practical application, however, these schemes can hardly handle recognition in complex situations. In the aforementioned patent, for instance, garment regions must be located from the garments' coordinate points; when some coordinate points of a garment region are lost due to occlusion or shooting angle, the missing points must be fitted. Besides the slow recognition caused by the heavy computation of the fitting process, the low precision of the fitted coordinate points also greatly degrades the final recognition performance in practice.
In summary, the above techniques still suffer from many problems in color judgment, such as misjudgment and missed judgment, so color determination is often inaccurate and the methods are hard to put to practical use.
Summary of the Invention
The purpose of the present invention is to provide a clothing color recognition method, device and system based on semantic segmentation, which uses a large amount of annotated semantic segmentation data to segment the clothing contained in portrait pictures in detail, extract a partial picture of each clothing region contained therein, and then process each partial picture into a new picture that is fed into a classifier for color recognition. Compared with other clothing color recognition methods, the present invention achieves a higher recognition success rate and recognition accuracy on the same data set; the improvement is especially large for complex cases such as bust-only shots and occlusion in the picture. For the bust-shot case, for example, the present invention achieves almost perfect judgment accuracy.
To achieve the above purpose, with reference to Fig. 1, the present invention proposes a clothing color recognition method based on semantic segmentation, comprising the following steps:
S1: Collect a certain quantity of portrait pictures with different parameters in different scenes and annotate the collected portrait pictures, the annotation content including clothing region segmentation information annotation and human joint point information annotation; perform random parameter transformation on the annotated portrait pictures to generate an initial sample set;
S2: Create a clothing region extraction model based on the JPPNet network, the model being used to extract the clothing regions in a portrait picture by combining the human joint point annotation with the clothing region segmentation annotation; import the portrait pictures of the initial sample set into the clothing region extraction model and extract from each portrait picture a partial picture of every clothing region it contains;
S3: Perform background transformation and color classification annotation on each extracted partial picture of a clothing region; perform size unification and random parameter transformation on the annotated partial pictures to generate a training sample set;
S4: Create a clothing color recognition model based on a classifier, and import the training sample set into the clothing color recognition model to train it;
S5: Collect portrait pictures containing clothing information in real time, and use the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture.
In a further embodiment, in step S1, the parameters of the portrait pictures include shooting parameters and human posture parameters;
the shooting parameters include lighting conditions, shooting scene, shooting angle, and shooting distance;
the human posture parameters include body posture, full-body close-up, and half-body close-up.
In a further embodiment, in step S1, the clothing region segmentation information annotation includes annotation of the head, upper garment, lower garment, limb, and foot regions;
the human joint point information annotation includes annotation of the wrist, elbow, shoulder, head, chest, knee, and ankle joint points of the human body.
In a further embodiment, the random parameter transformation refers to random cropping, rotation, flipping, and color transformation of the pictures.
In a further embodiment, in step S3, performing background transformation on each extracted partial picture of a clothing region means unifying the background regions of the extracted partial pictures into a pure white background.
In a further embodiment, in step S3, bilinear interpolation is used to unify the annotated partial pictures to the same size.
In a further embodiment, in step S5, using the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture comprises the following steps:
S51: Collect portrait pictures containing clothing information in real time, and import the portrait pictures into the clothing region extraction model to extract a partial picture of every clothing region contained in each portrait picture;
S52: Perform size unification and background transformation on each extracted partial picture, and import the processed partial pictures into the clothing color recognition model to identify the corresponding clothing colors.
Based on the foregoing clothing color recognition method, the present invention proposes a clothing color recognition device based on semantic segmentation, comprising:
a clothing region extraction model created on the basis of the JPPNet network, used to extract the clothing regions in a portrait picture by combining human joint point annotation with clothing region segmentation annotation;
a clothing color recognition model created on the basis of a classifier, used to recognize the clothing color in each imported partial clothing-region picture;
a portrait picture collection module, used to collect portrait pictures with different parameters in different scenes;
a sample set generation module, used to perform random parameter transformation on imported pictures and generate the corresponding training picture sample set;
an image preprocessing module, used to perform background transformation and size unification on imported pictures.
Based on the foregoing clothing color recognition method, the present invention proposes a clothing color recognition system based on semantic segmentation, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor;
when the processor executes the computer program, the steps of the foregoing clothing color recognition method are implemented.
Compared with the prior art, the technical solution of the present invention has the following notable beneficial effects:
(1) Using a large amount of annotated semantic segmentation data, the clothing contained in portrait pictures is segmented more accurately and finely so that a partial picture of every clothing region can be extracted. Because human joint point annotation is introduced, the extracted partial pictures are no longer limited by factors such as incompleteness or shooting direction. Each partial picture is then processed into a new picture and fed into the classifier for color recognition, which narrows the color recognition scope and improves color recognition efficiency. Compared with other clothing color recognition methods, the present invention achieves a higher recognition success rate and accuracy on the same data set.
(2) It is little disturbed by complex situations such as bust-only shots and occlusion; for bust shots, for example, the present invention achieves almost perfect judgment accuracy.
(3) Building the clothing region extraction model on the JPPNet network makes clothing region extraction fast and keeps the overall clothing color recognition time short.
(4) Through random parameter transformation of the images, a large number of sample pictures are generated from a small number of portrait pictures, so sample sets are generated efficiently.
(5) The original background region of each generated partial picture is transformed into a uniform background (for example, a pure white background), avoiding background interference.
It should be understood that all combinations of the foregoing concepts and of the additional concepts described in more detail below are regarded as part of the inventive subject matter of this disclosure, provided such concepts are not mutually contradictory. In addition, all combinations of the claimed subject matter are regarded as part of the inventive subject matter of this disclosure.
The foregoing and other aspects, embodiments and features of the teachings of the present invention can be understood more fully from the following description taken together with the accompanying drawings. Other additional aspects of the present invention, such as the features and/or beneficial effects of the exemplary embodiments, will be apparent from the description below or learned through practice of specific embodiments according to the teachings of the present invention.
Brief Description of the Drawings
The drawings are not intended to be drawn to scale. In the drawings, identical or nearly identical components shown in the various figures may be denoted by the same reference numeral. For clarity, not every component is labeled in every figure. Embodiments of various aspects of the present invention will now be described by way of example with reference to the drawings, in which:
Fig. 1 is a flowchart of the clothing color recognition method based on semantic segmentation of the present invention.
Fig. 2 is a diagram of the clothing color recognition steps of the present invention.
Fig. 3 is a schematic diagram of a specific recognition scene of the present invention.
Detailed Description
For a better understanding of the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
Aspects of the present invention are described in this disclosure with reference to the drawings, which show a number of illustrative embodiments. The embodiments of this disclosure are not necessarily intended to include all aspects of the present invention. It should be understood that the various concepts and embodiments introduced above, and those described in more detail below, can be implemented in any of many ways, since the concepts and embodiments disclosed herein are not limited to any particular implementation. In addition, some aspects disclosed herein may be used alone or in any suitable combination with other disclosed aspects.
Specific Embodiment 1
With reference to Fig. 1, the present invention proposes a clothing color recognition method based on semantic segmentation, comprising the following steps:
S1: Collect a certain quantity of portrait pictures with different parameters in different scenes and annotate the collected portrait pictures, the annotation content including clothing region segmentation information annotation and human joint point information annotation; perform random parameter transformation on the annotated portrait pictures to generate an initial sample set.
S2: Create a clothing region extraction model based on the JPPNet network (Joint Body Parsing & Pose Estimation Network), the model being used to extract the clothing regions in a portrait picture by combining the human joint point annotation with the clothing region segmentation annotation; import the portrait pictures of the initial sample set into the clothing region extraction model and extract from each portrait picture a partial picture of every clothing region it contains. The JPPNet network is a TensorFlow-based deep learning method for human parsing and pose estimation commonly used in the prior art.
S3: Perform background transformation and color classification annotation on each extracted partial picture of a clothing region; perform size unification and random parameter transformation on the annotated partial pictures to generate a training sample set.
S4: Create a clothing color recognition model based on a classifier, and import the training sample set into the clothing color recognition model to train it.
S5: Collect portrait pictures containing clothing information in real time, and use the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture.
The foregoing steps are elaborated below with specific examples.
Step 1: Generate the initial sample set
First, collect portrait pictures that differ in posture, lighting, scene, and angle, including some full-body and half-body pictures, and annotate the pictures in two respects: the first is segmentation annotation of regions such as the head, upper garment, lower garment, and limbs; the second is annotation of 15 human joint points, including the wrists, elbows, shoulders, head, chest, knees, and ankles.
The foregoing annotation data is the data basis on which, in Step 2, the JPPNet network extracts from each portrait picture a partial picture of every clothing region it contains. Through annotation, this step distinguishes different garments such as coats, tops, and one-piece outfits. The number of collected portrait images is limited, yet the clothing color recognition model produced by training becomes more robust and accurate as the training samples grow larger and more varied; to enlarge the number of training samples as much as possible, the present invention therefore proposes performing random parameter transformation on the annotated portrait pictures (for example random cropping, rotation, flipping, and color transformation) to generate the initial sample set.
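The random parameter transformation described above can be sketched as follows. This is an illustrative, minimal sketch: the patent names random cropping, rotation, flipping, and color transformation but fixes no parameters, so the crop ratio, rotation choices, and color-jitter range below are assumptions.

```python
import numpy as np

def random_augment(img, rng):
    """One random crop / rotation / flip / colour jitter of an HxWx3 uint8 array."""
    h, w, _ = img.shape
    # random crop to 80% of each side (ratio is an assumption)
    ch, cw = int(h * 0.8), int(w * 0.8)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = img[top:top + ch, left:left + cw]
    # random rotation by a multiple of 90 degrees
    out = np.rot90(out, k=int(rng.integers(0, 4)))
    # random horizontal flip
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # colour transform: scale each channel by a factor in [0.8, 1.2]
    scale = rng.uniform(0.8, 1.2, size=3)
    return np.clip(out.astype(np.float32) * scale, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
sample = rng.integers(0, 256, size=(100, 120, 3), dtype=np.uint8)
augmented = [random_augment(sample, rng) for _ in range(4)]
```

Running the transform several times over each annotated picture multiplies the sample count without any further labeling effort, which is exactly the purpose stated above.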
Step 2: Extract from each portrait picture a partial picture of every clothing region it contains
Create a clothing region extraction model based on the JPPNet network and import the portrait pictures of the initial sample set into it for training. The clothing region extraction model extracts the clothing regions in a portrait picture by combining the human joint point annotation with the clothing region segmentation annotation, finally extracting from each portrait picture a partial picture of every clothing region it contains. In the present invention, the types of clothing region finally extracted are determined by the user according to actual needs; for example, only partial pictures containing the top and the bottom, respectively, may be extracted.
The network adopted by the clothing region extraction model is JPPNet, which uses human joint points to assist the segmentation of different body regions. Thanks to this assistance, the clothing region extraction model greatly reduces mis-segmentation compared with common semantic segmentation models, significantly enhancing the generalization ability of the whole model.
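The extraction of per-region partial pictures from a segmentation result can be sketched as below. This is a hedged stand-in, not the JPPNet model itself: the segmentation mask is assumed to be already computed, and the label values (1 = upper garment, 2 = lower garment) are illustrative assumptions.

```python
import numpy as np

UPPER, LOWER = 1, 2  # assumed label map; the patent fixes no numeric labels

def extract_region(image, mask, label):
    """Return the tight bounding-box crop of `image` where mask == label,
    or None if the label is absent (e.g. bottoms missing in a bust shot)."""
    ys, xs = np.nonzero(mask == label)
    if ys.size == 0:
        return None
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].copy()

image = np.zeros((60, 40, 3), dtype=np.uint8)
mask = np.zeros((60, 40), dtype=np.uint8)
mask[10:30, 5:35] = UPPER             # the top occupies the upper half
image[mask == UPPER] = (200, 30, 30)  # a red top
top_crop = extract_region(image, mask, UPPER)
bottom_crop = extract_region(image, mask, LOWER)  # absent region -> None
```

Returning `None` for an absent label mirrors the method's tolerance of incomplete portraits: a half-body picture simply yields fewer partial pictures rather than a wrong one.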
Step 3: Generate the training sample set
After the person's top and bottom are extracted by the semantic segmentation model, the top and the bottom are each extracted into a new picture; these new pictures are unified in size using a method such as bilinear interpolation and then sent as training data to the next-stage classifier for classification.
When forming the partial pictures, considering that the background color would affect the color classifier, the present invention proposes replacing the original background region with a pure white background to avoid background interference.
Likewise, to improve the robustness and recognition rate of the clothing color recognition model, the present invention proposes performing random parameter transformation on the annotated partial pictures (for example random cropping, rotation, flipping, and color transformation) to enlarge the number of training samples as much as possible and generate the training sample set.
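The two preprocessing operations of this step, background whitening and bilinear size unification, can be sketched as follows. The mask-based whitening matches the pure-white replacement proposed above; the hand-rolled bilinear resize is only a dependency-free illustration of the interpolation the patent mentions, and the target size is an assumption.

```python
import numpy as np

def whiten_background(crop, crop_mask):
    """Replace non-garment pixels with pure white, as the text proposes."""
    out = crop.copy()
    out[~crop_mask] = 255
    return out

def bilinear_resize(img, out_h, out_w):
    """Minimal bilinear resize for an HxWx3 uint8 array (no external deps)."""
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]; wx = (xs - x0)[None, :, None]
    f = img.astype(np.float32)
    top = f[y0][:, x0] * (1 - wx) + f[y0][:, x1] * wx
    bot = f[y1][:, x0] * (1 - wx) + f[y1][:, x1] * wx
    return ((1 - wy) * top + wy * bot).astype(np.uint8)

crop = np.full((20, 30, 3), (60, 120, 180), dtype=np.uint8)
crop_mask = np.zeros((20, 30), dtype=bool)
crop_mask[5:15, 8:22] = True            # garment pixels only
white = whiten_background(crop, crop_mask)
resized = bilinear_resize(white, 64, 64)  # unified input size (assumed 64x64)
```

In practice a library resize (e.g. an image-processing package's bilinear mode) would be used; the point here is only the order of operations: whiten first, then unify size, then classify.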
Step 4: Create and train the clothing color recognition model
Import the training sample set generated in Step 3 into the classifier for color recognition to complete the training of the clothing color recognition model. During training, the training sample set can be divided into a training set and a test set at a set ratio; the training set is used to train the clothing color recognition model, and the test set is then used to verify it (for example, judging whether the recognition success rate and accuracy meet preset requirements). Training is complete once verification passes; otherwise the model parameters are adjusted and the model retrained until verification passes.
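The train-then-verify loop of this step can be made concrete with a deliberately simple stand-in: the patent says only "a classifier", so a nearest-centroid classifier on mean RGB features and an 80/20 split are assumptions used here purely to illustrate the split/train/verify structure, not the patented model.

```python
import numpy as np

def mean_rgb(patch):
    """Mean RGB feature of a garment patch, ignoring pure-white background pixels."""
    px = patch.reshape(-1, 3)
    keep = ~np.all(px == 255, axis=1)
    return px[keep].mean(axis=0) if keep.any() else px.mean(axis=0)

def fit_centroids(feats, labels):
    """'Training': one mean-feature centroid per colour class."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, feat):
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - feat))

# Synthetic red/blue garment patches stand in for the training sample set.
rng = np.random.default_rng(1)
bases = {"red": (200, 30, 30), "blue": (30, 30, 200)}
patches, labels = [], []
for name, base in bases.items():
    for _ in range(20):
        noise = rng.integers(-10, 11, size=(16, 16, 3))
        patches.append(np.clip(np.array(base) + noise, 0, 255).astype(np.uint8))
        labels.append(name)
feats = np.array([mean_rgb(p) for p in patches])
labels = np.array(labels)

# Split at a set ratio (80/20), train on one part, verify on the other.
idx = rng.permutation(len(feats))
cut = int(0.8 * len(feats))
train, test = idx[:cut], idx[cut:]
centroids = fit_centroids(feats[train], labels[train])
accuracy = np.mean([predict(centroids, feats[i]) == labels[i] for i in test])
```

If `accuracy` fell below the preset requirement, the loop described above would adjust the model parameters and retrain; with a real deep classifier the same structure applies, only `fit_centroids`/`predict` are replaced by its training and inference calls.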
Specific Embodiment 2
With reference to Fig. 2, on the basis of the successfully trained clothing color recognition model, in step S5, using the clothing region extraction model and the clothing color recognition model to recognize the clothing color of one or more clothing regions in the portrait picture comprises the following steps:
S51: Collect portrait pictures containing clothing information in real time, and import the portrait pictures into the clothing region extraction model to extract a partial picture of every clothing region contained in each portrait picture.
S52: Perform size unification and background transformation on each extracted partial picture, and import the processed partial pictures into the clothing color recognition model to identify the corresponding clothing colors.
For the portrait picture in Fig. 3(a), the clothing color recognition method of the present invention first segments and extracts two partial pictures containing only the top garment and only the bottom garment, respectively; background unification, size unification and similar processing are then applied to these generated partial pictures, yielding the two pictures of Fig. 3(b) and Fig. 3(c); finally, clothing color recognition is performed on Fig. 3(b) and Fig. 3(c).
Through a large number of samples, the present invention derives the correspondence between human joint points and clothing regions, and in the actual clothing recognition process uses the human joint points to extract each clothing region precisely. For example, in a bust photo lacking a complete picture of the bottoms, the incomplete bottoms picture can still be extracted from the whole portrait by combining the limbs, knees, hips and so on, after which background processing and color recognition are applied to the extracted incomplete bottoms picture.
Practice has shown that schemes classifying top and bottom colors by direct classification, graph convolutional networks, or attention mechanisms typically reach only 60%-70% accuracy on surveillance-scene data, whereas the present invention, tested on the same data set, reaches 85% accuracy. In complex cases such as bust-only shots and occlusion, previous methods would almost always take the color of the occluder as the clothing color; the clothing color recognition method proposed here judges the great majority of such cases correctly and, for bust shots, is almost always accurate.
Application Scenario 1
At pedestrian checkpoints, complex and variable conditions of time, lighting, angle, occlusion, posture and so on make the captured images of people highly varied. The clothing color recognition method of the present invention handles these complex situations effectively: it combines the human joint point information in the picture to extract different clothing regions precisely, such as the top region and the bottom region; it then extracts the top from the top region and the bottom from the bottom region, replaces the background color of the top and bottom pictures to make the garment stand out, and finally performs color recognition on the top picture and the bottom picture. On the one hand this avoids the clothing-color misjudgment problems of incomplete portraits, such as bust or head-and-shoulder shots caused by shooting angle or occlusion, thereby effectively avoiding misjudgment; on the other hand it speeds up color recognition.
Application Scenario 2
Captured photos often suffer from severe occlusion, for example by backpacks, large hand-held objects, or crowds; crowd occlusion in particular very easily causes mis-segmentation in semantic segmentation models. The human-joint-point-assisted segmentation proposed by the present invention avoids this to a certain extent: when people's tops and bottoms in the crowd are separated, a scheme of cutting from the blank region can be adopted to avoid mis-segmentation.
Specific Embodiment 3
Based on the foregoing clothing color recognition method, the present invention proposes a clothing color recognition device based on semantic segmentation, comprising:
(1) a clothing region extraction model created on the basis of the JPPNet network, used to extract the clothing regions in a portrait picture by combining human joint point annotation with clothing region segmentation annotation;
(2) a clothing color recognition model created on the basis of a classifier, used to recognize the clothing color in each imported partial clothing-region picture;
(3) a portrait picture collection module, used to collect portrait pictures with different parameters in different scenes;
(4) a sample set generation module, used to perform random parameter transformation on imported pictures and generate the corresponding training picture sample set;
(5) an image preprocessing module, used to perform background transformation and size unification on imported pictures.
Specific Embodiment 4
Based on the foregoing clothing color recognition method, the present invention proposes a clothing color recognition system based on semantic segmentation, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the steps of the foregoing clothing color recognition method are implemented.
Those of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments can be carried out by program instructions controlling the relevant hardware. The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Those of ordinary skill in the art to which the present invention belongs may make various changes and refinements without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention shall be as defined by the claims.

Claims (9)

  1. 一种基于语义分割的服装颜色识别方法,其特征在于,所述服装颜色识别方法包括以下步骤:
    S1:采集一定量不同场景下不同参数的人像图片,对采集的人像图片进行标注,标注内容包括服装区域分割信息标注和人体关节点信息标注;对标注好的人像图片进行随机参数变换处理,生成初始样本集;
    S2:基于JPPNet网络创建服装区域提取模型,所述服装区域提取模型用于结合人体关节点信息标注和服装区域分割信息标注对人像图片上的服装区域进行提取;将初始样本集中的人像图片导入服装区域提取模型,从人像图片中提取出各自所包含的每个服装区域的局部图片;
    S3:对提取出的每个服装区域的局部图片进行背景变换和颜色分类标注;对标注好的局部图片进行尺寸统一和随机参数变换处理,生成训练样本集;
    S4:基于分类器创建服装颜色识别模型,将训练样本集导入服装颜色识别模型以对服装颜色识别模型进行训练;
    S5:实时采集包含有服装信息的人像图片,采用服装区域提取模型和服装颜色识别模型对人像图片中的一个或多个服装区域的服装颜色进行识别。
  2. 根据权利要求1所述的基于语义分割的服装颜色识别方法,其特征在于,步骤S1中,所述人像图片的参数包括拍摄参数和人体姿态参数;
    所述拍摄参数包括光照条件、拍摄场景、拍摄角度、拍摄距离;
    所述人体姿态参数包括人体姿态、全身特写、半身特写。
  3. 根据权利要求1所述的基于语义分割的服装颜色识别方法,其特征在于,步骤S1中,所述服装区域分割信息标注包括头部、上衣、下衣、四肢、脚部区域的信息标注;
    所述人体关节点信息标注包括人体的手腕、肘部、肩部、头部、胸腔、膝关节、脚踝关节点的信息标注。
  4. 根据权利要求1所述的基于语义分割的服装颜色识别方法,其特征在于,所述随机参数变换处理是指,对图片进行随机裁剪、旋转、翻转、颜色变换处理。
  5. 根据权利要求1所述的基于语义分割的服装颜色识别方法,其特征在于,步骤S3中,所述对提取出的每个服装区域的局部图片进行背景变换是指,将提取出的每个服装区域的局部图片中的背景区域统一成纯白色背景。
  6. 根据权利要求1所述的基于语义分割的服装颜色识别方法,其特征在于,步骤S3中,采用双线性插值的方法将标注好的局部图片统一到相同的尺寸。
  7. 根据权利要求1所述的基于语义分割的服装颜色识别方法,其特征在于,步骤S5中,所述采用服装区域提取模型和服装颜色识别模型对人像图片中的一个或多个服装区域的服装颜色进行识别包括以下步骤:
    S51:实时采集包含有服装信息的人像图片,将人像图片导入服装区域提取模型以提取出人像图片中包含的每个服装区域的局部图片;
    S52:对提取出的每个服装区域的局部图片进行尺寸统一和背景变换处理,将处理后的每个服装区域的局部图片导入服装颜色识别模型以识别出对应的服装颜色。
  8. A clothing color recognition device based on semantic segmentation, characterized in that the clothing color recognition device comprises:
    a clothing region extraction model built on the JPPNet network, for extracting clothing regions from portrait pictures by combining human joint point annotations with clothing region segmentation annotations;
    a clothing color recognition model built on a classifier, for recognizing the clothing color in each imported clothing-region crop;
    a portrait picture acquisition module, for acquiring portrait pictures with different parameters in different scenes;
    a sample set generation module, for applying random parameter transformations to imported pictures to generate the corresponding training picture sample set;
    an image preprocessing module, for performing background replacement and size normalization on imported pictures.
  9. A clothing color recognition system based on semantic segmentation, characterized in that the clothing color recognition system comprises a memory, a processor, and a computer program stored in the memory and executable on the processor;
    when the processor executes the computer program, the steps of the clothing color recognition method according to any one of claims 1-7 are implemented.
PCT/CN2020/121515 2020-02-18 2020-10-16 Clothing color recognition method, device and system based on semantic segmentation WO2021164283A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010098415.2A CN111325806A (zh) 2020-02-18 2020-02-18 Clothing color recognition method, device and system based on semantic segmentation
CN202010098415.2 2020-02-18

Publications (1)

Publication Number Publication Date
WO2021164283A1 (zh)

Family

ID=71172768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121515 WO2021164283A1 (zh) 2020-02-18 2020-10-16 Clothing color recognition method, device and system based on semantic segmentation

Country Status (2)

Country Link
CN (1) CN111325806A (zh)
WO (1) WO2021164283A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113848736A (zh) * 2021-09-13 2021-12-28 青岛海尔科技有限公司 Clothing information processing method and device based on a smart wardrobe
CN113919998A (zh) * 2021-10-14 2022-01-11 天翼数字生活科技有限公司 Picture anonymization method guided by semantic and pose maps
CN113963374A (zh) * 2021-10-19 2022-01-21 中国石油大学(华东) Pedestrian attribute recognition method based on multi-level features assisted by identity information
CN114048489A (zh) * 2021-09-01 2022-02-15 广东智媒云图科技股份有限公司 Human attribute data processing method and device based on privacy protection

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN111325806A (zh) 2020-02-18 2020-06-23 苏州科达科技股份有限公司 Clothing color recognition method, device and system based on semantic segmentation
CN112419249B (zh) * 2020-11-12 2022-09-06 厦门市美亚柏科信息股份有限公司 Special clothing picture conversion method, terminal device, and storage medium
CN112528855B (zh) * 2020-12-11 2021-09-03 南方电网电力科技股份有限公司 Method and device for recognizing dress-code compliance in electric power work
CN112990012A (zh) * 2021-03-15 2021-06-18 深圳喜为智慧科技有限公司 Work-uniform color recognition method and system under occlusion conditions
CN113516062B (zh) * 2021-06-24 2021-11-26 深圳开思信息技术有限公司 Customer identification method and system for auto repair shops
CN114201681A (zh) * 2021-12-13 2022-03-18 支付宝(杭州)信息技术有限公司 Method and device for recommending clothing
CN114093011B (zh) * 2022-01-12 2022-05-06 北京新氧科技有限公司 Hair classification method, device, equipment, and storage medium
CN117409208B (zh) * 2023-12-14 2024-03-08 武汉纺织大学 Real-time clothing image semantic segmentation method and system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106227827A (zh) * 2016-07-25 2016-12-14 华南师范大学 Clothing image foreground color feature extraction method, and clothing retrieval method and system
CN107766861A (zh) * 2017-11-14 2018-03-06 深圳码隆科技有限公司 Clothing color recognition method and device for person images, and electronic device
CN108229288A (zh) * 2017-06-23 2018-06-29 北京市商汤科技开发有限公司 Neural network training and clothing color detection methods and devices, storage medium, and electronic device
CN109325952A (zh) * 2018-09-17 2019-02-12 上海宝尊电子商务有限公司 Fashion clothing image segmentation method based on deep learning
CN110263605A (zh) * 2018-07-18 2019-09-20 桂林远望智能通信科技有限公司 Pedestrian clothing color recognition method and device based on two-dimensional human pose estimation
CN111325806A (zh) * 2020-02-18 2020-06-23 苏州科达科技股份有限公司 Clothing color recognition method, device and system based on semantic segmentation

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN106227827A (zh) * 2016-07-25 2016-12-14 华南师范大学 Clothing image foreground color feature extraction method, and clothing retrieval method and system
CN108229288A (zh) * 2017-06-23 2018-06-29 北京市商汤科技开发有限公司 Neural network training and clothing color detection methods and devices, storage medium, and electronic device
CN107766861A (zh) * 2017-11-14 2018-03-06 深圳码隆科技有限公司 Clothing color recognition method and device for person images, and electronic device
CN110263605A (zh) * 2018-07-18 2019-09-20 桂林远望智能通信科技有限公司 Pedestrian clothing color recognition method and device based on two-dimensional human pose estimation
CN109325952A (zh) * 2018-09-17 2019-02-12 上海宝尊电子商务有限公司 Fashion clothing image segmentation method based on deep learning
CN111325806A (zh) * 2020-02-18 2020-06-23 苏州科达科技股份有限公司 Clothing color recognition method, device and system based on semantic segmentation

Non-Patent Citations (2)

Title
LIANG, XIAODAN ET AL.: "Look into Person: Joint Body Parsing & Pose Estimation Network and a New Benchmark", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 41, no. 4, 30 April 2019 (2019-04-30), XP011712946, DOI: 10.1109/TPAMI.2018.2820063 *
XULUHONGSHANG: "FashionAI" (non-official translation), CSDN Blog, HTTPS://BLOG.CSDN.NET/XULUHONGSHANG/ARTICLE/DETAILS/80616331, 7 June 2018 (2018-06-07) *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN114048489A (zh) * 2021-09-01 2022-02-15 广东智媒云图科技股份有限公司 Human attribute data processing method and device based on privacy protection
CN113848736A (zh) * 2021-09-13 2021-12-28 青岛海尔科技有限公司 Clothing information processing method and device based on a smart wardrobe
CN113919998A (zh) * 2021-10-14 2022-01-11 天翼数字生活科技有限公司 Picture anonymization method guided by semantic and pose maps
CN113919998B (zh) * 2021-10-14 2024-05-14 天翼数字生活科技有限公司 Picture anonymization method guided by semantic and pose maps
CN113963374A (zh) * 2021-10-19 2022-01-21 中国石油大学(华东) Pedestrian attribute recognition method based on multi-level features assisted by identity information

Also Published As

Publication number Publication date
CN111325806A (zh) 2020-06-23

Similar Documents

Publication Publication Date Title
WO2021164283A1 (zh) Clothing color recognition method, device and system based on semantic segmentation
Wang et al. Deep 3D human pose estimation: A review
US10963041B2 (en) Gesture recognition using multi-sensory data
Wetzler et al. Rule of thumb: Deep derotation for improved fingertip detection
Simon et al. Hand keypoint detection in single images using multiview bootstrapping
Tang et al. Facial landmark detection by semi-supervised deep learning
Petersen et al. Real-time modeling and tracking manual workflows from first-person vision
WO2021082692A1 (zh) Pedestrian picture annotation method and device, storage medium, and smart device
CN112101208A (zh) Gesture recognition method and device based on serial feature fusion for elderly persons
CN106952312B (zh) Marker-free augmented reality registration method based on line feature description
JP2018147313A (ja) Object pose estimation method, program, and device
Das et al. Action recognition based on a mixture of RGB and depth based skeleton
JP2023511243A (ja) Image processing method and apparatus, electronic device, and recording medium
CN109166172B (zh) Clothing model construction method, device, server, and storage medium
Nguyen et al. Combined YOLOv5 and HRNet for high accuracy 2D keypoint and human pose estimation
Vasconcelos et al. Methods to automatically build point distribution models for objects like hand palms and faces represented in images
CN102496174A (zh) Face sketch index generation method for security surveillance
CN115830635A (zh) PVC glove recognition method based on keypoint detection and object recognition
Bourbakis et al. Skin-based face detection-extraction and recognition of facial expressions
Aonty et al. Multi-Person Pose Estimation Using Group-Based Convolutional Neural Network Model
Li et al. RaP-Net: A region-wise and point-wise weighting network to extract robust features for indoor localization
Ye et al. Human motion analysis based on extraction of skeleton and dynamic time warping algorithm using RGBD camera
Matsumoto et al. Automatic human pose annotation for loose-fitting clothes
Chen et al. Zeropose: Cad-model-based zero-shot pose estimation
Matsumoto et al. Human pose annotation using a motion capture system for loose-fitting clothes

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 20920283; country of ref document: EP; kind code of ref document: A1)

NENP Non-entry into the national phase (ref country code: DE)

122 Ep: PCT application non-entry in European phase (ref document number: 20920283; country of ref document: EP; kind code of ref document: A1)