CN109849576A - Method for assisting drawing by referring to a grayscale image - Google Patents

Method for assisting drawing by referring to a grayscale image

Info

Publication number
CN109849576A
CN109849576A (application CN201910149284.3A)
Authority
CN
China
Prior art keywords
image
semantic segmentation
grayscale image
grayscale
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910149284.3A
Other languages
Chinese (zh)
Other versions
CN109849576B (en)
Inventor
孙凌云
陈鹏
向为
陈培
高暐玥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910149284.3A
Publication of CN109849576A
Application granted
Publication of CN109849576B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for assisting drawing by referring to a grayscale image. In the training stage, training images are obtained and converted into grayscale images; at the same time, the training images are annotated with semantic segmentation to obtain semantic segmentation images. Taking the semantic segmentation images as input and the corresponding grayscale images as output, a deep learning model capable of image-to-image translation is trained and its parameters determined, yielding a grayscale-image generation model. In the application stage, a semantic segmentation figure is drawn and input into the grayscale-image generation model, which computes several grayscale images; a reference grayscale image is selected from among them and texture is rendered onto the semantic segmentation figure, refining the painting. By converting a semantic segmentation figure into generated grayscale images, the drawing assistance method provides light-and-shade and texture references for the content details of a drawing, supports the author in finding inspiration in the grayscale images, and reduces the time the painter spends designing the details of a work.

Description

Method for assisting drawing by referring to a grayscale image
Technical field
The invention belongs to the field of computer-assisted painting, and in particular relates to a method for assisting drawing by referring to a grayscale image.
Background technique
With the development of the computer industry, computers have evolved in creative design from executor, to imitator, to assistant. In the field of painting, the computer has gradually shifted from a multi-purpose tool to the role of a drawing assistant. Technical progress has allowed computers to learn the works of a given artist, capture that artist's style, and imitate it to create paintings. In the role of assistant, the computer can also collaborate with the user in creation, providing various kinds of support during the process so that even laypeople can produce high-level paintings. Within the field of assisted drawing, computer assistance still has great room to expand.
Existing applications of artificial-intelligence generation techniques in painting can be roughly divided into two kinds: work generation and auxiliary-information generation. Work generation directly produces the final result, which is usually fixed. Auxiliary-information generation provides supporting information during the drawing process and helps the painter reach the final result quickly.
Applications of work generation start from the author's first draft: the author first designs a simple draft of the work, such as a rough sketch, line art, or a layout, and the artificial-intelligence technique then completes the remaining generation work, turning the simple draft directly into a finished work. Although convenient, such methods leave the model a very large free space: the model can choose among too many result types, so the generated images are often of poor quality; the result also depends too heavily on the content and drawing skill of the first draft, and only the very small fraction of drafts that match what the model has learned yield good results. Current deep learning models can already learn the style of paintings and obtain finished works through style transfer with fairly good effect, but such works tend to be abstract, where fine flaws do not affect the experience of the whole painting. On non-abstract visual images, however, people are very sensitive to small distortions and disorder, and AI-generated works often contain errors that violate common sense. Limited by current generation techniques, directly generating results as finished works in non-abstract image domains still faces many difficulties.
Applications of auxiliary-information generation supply generated results to the author as references during drawing. Such references come in many forms, for example predicting the next stroke while drawing line art, or generating a preview effect while drawing a layout. The references these methods provide are mostly based on lines and overall effect, and the vast majority give no guidance at all on content details. Yet in ordinary drawing the painter must enrich the details of the painting, drawing inspiration for details from daily life and memory, and therefore badly needs guidance and reference in this respect. At present, artificial-intelligence techniques still lack applications that provide references for content details.
Summary of the invention
The object of the present invention is to provide a method for assisting drawing by referring to a grayscale image. The method converts a semantic segmentation figure into generated grayscale images, provides light-and-shade and texture references for the content details of a drawing, supports the author in finding inspiration in the grayscale images, and reduces the time the painter spends designing the details of a work, thereby assisting painting efficiently and conveniently.
To achieve the above object, the present invention provides the following technical scheme:
A method for assisting drawing by referring to a grayscale image, comprising the following steps:
Training stage: obtain training images and convert them into grayscale images; determine the semantic types and the color–semantics correspondence; perform semantic segmentation on the training images according to the color–semantics correspondence to obtain semantic segmentation images; taking the semantic segmentation images as input and the corresponding grayscale images as output, iteratively train a deep learning model capable of image-to-image translation, obtaining a conversion model from semantic segmentation figure to grayscale image;
Application stage: according to the color–semantics correspondence and the semantic types, draw a semantic segmentation figure that matches the drawing intention, and input the drawn semantic segmentation figure into the conversion model from semantic segmentation figure to grayscale image, which computes several grayscale images; select a reference grayscale image from among them and render texture onto the semantic segmentation figure, enriching the content of the painting.
In the present invention, artificial intelligence assists the painting process: a deep learning model is trained from semantic segmentation figures to grayscale images, yielding a model that converts a semantic segmentation figure into a grayscale image. The trained model generates grayscale images that match the painter's intention as references. The generation process can be iterated, supporting the painter in continually looking for inspiration in the grayscale images the model generates and reducing the time spent designing the details of a work. The method is efficient, convenient, and novel.
Detailed description of the invention
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
Fig. 1 is a flow diagram of the method for assisting drawing by referring to a grayscale image according to the present invention;
Fig. 2 is an example of a semantic segmentation figure;
Fig. 3 is an example of a grayscale image.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit its scope of protection.
To improve the effect and efficiency of drawing, as shown in Fig. 1, the present invention provides a method for assisting drawing by referring to a grayscale image, comprising the following steps.
S101: obtain training images and convert them into grayscale images with an image graying algorithm.
In this embodiment, the training images may come from web search, manual shooting, or existing databases. The training images should generally share a similar semantic composition, for example all landscape paintings, street scenes, or high-rise buildings. To guarantee enough data for training the deep learning model, no fewer than 5000 training images are acquired.
A grayscale image removes the color and part of the details of the original image but retains information such as texture and light-and-shade in the content details; its richness of detail is lower than the original image but higher than a sketch. As an image type, the grayscale image has three advantages: first, it can be generated by a deep learning model; second, it carries rich detail information; third, it can be obtained by running an algorithm on the training images. These features make the grayscale image well suited as a reference image.
In this embodiment, the training images (i.e. RGB color images) are converted into grayscale images using formula (1):
Gray(i, j) = 0.299*R(i, j) + 0.587*G(i, j) + 0.114*B(i, j)   (1)
where R(i, j) is the R-channel image, G(i, j) is the G-channel image, and B(i, j) is the B-channel image; a converted grayscale image is shown in Fig. 3.
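As a minimal sketch, formula (1) can be applied per pixel with NumPy; the function name and the uint8 handling below are illustrative choices, not from the patent:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB image (uint8) to an H x W grayscale image
    using the luma weights of formula (1)."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    # Clamp to the valid intensity range and return an 8-bit image.
    return np.clip(gray, 0, 255).astype(np.uint8)
```

Applied to a white pixel this yields 255, since the three weights sum to 1.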
S102: determine the semantic types and the color–semantics correspondence, and perform semantic segmentation on the training images according to the color–semantics correspondence to obtain semantic segmentation images.
Here, performing semantic segmentation on the training images according to the color–semantics correspondence comprises: segmenting a training image according to the colors corresponding to the objects in the image, and labeling similar or identical objects in the image with the same color, thereby obtaining a semantic segmentation image. This process may be carried out manually or by a chosen segmentation algorithm; a segmented semantic segmentation image is shown in Fig. 2.
In this embodiment, the objects in the training images are classified into several categories; each category represents identical or similar objects and is denoted by a fixed color. Similar objects are objects belonging to the same class: for example, birches and cypresses both belong to the class of trees and can be represented by green. In this way a training image can be converted into a semantic segmentation image expressed only in colors, where regions with the same semantic information share the same color.
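A color–semantics correspondence of this kind can be sketched as a lookup table mapping each fixed class color to a class id; the class names and RGB values below are hypothetical examples, not values from the patent:

```python
import numpy as np

# Hypothetical color-semantics correspondence: one fixed RGB color per class.
COLOR_TO_CLASS = {
    (34, 139, 34): "tree",      # all tree species (birch, cypress, ...) share green
    (135, 206, 235): "sky",
    (128, 128, 128): "road",
}

def segmentation_to_labels(seg_rgb):
    """Map an H x W x 3 color-coded segmentation image to an H x W array of
    integer class ids; pixels with unknown colors map to -1."""
    classes = list(COLOR_TO_CLASS)  # insertion order defines the class ids
    labels = np.full(seg_rgb.shape[:2], -1, dtype=np.int64)
    for idx, color in enumerate(classes):
        mask = np.all(seg_rgb == np.array(color, dtype=seg_rgb.dtype), axis=-1)
        labels[mask] = idx
    return labels
```

The resulting label map is what would later be one-hot encoded as model input.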
S103: taking the semantic segmentation images as input and the corresponding grayscale images as output, train a deep learning model capable of image-to-image translation, iterating until the model converges, to obtain the conversion model from semantic segmentation figure to grayscale image.
In this embodiment, the pix2pixHD model is selected as the deep learning model and trained end to end to obtain the conversion model from semantic segmentation figure to grayscale image. pix2pixHD is a conditional generative adversarial network mainly applied in the field of image translation. For the task proposed by the present invention, the goal of the generator G is to generate the corresponding grayscale image from the semantic segmentation figure, while the discriminator D: 1) distinguishes real grayscale images from generated ones; and 2) judges whether the mapping between a grayscale image and a semantic segmentation figure is correct. As an image translation model with strong image-conversion ability, pix2pixHD can learn texture and light-and-shade regularities from the training images; it can therefore convert a semantic segmentation image into a grayscale image and thereby enrich the detail information.
Specifically, the input of the pix2pixHD model is the one-hot representation of the label map. The pix2pixHD generator consists of two sub-generators: G1, a global generator, and G2, a local enhancer used to enlarge the size of the generated image; each is composed of a group of convolutional networks, residual networks, and transposed convolutional networks. Since the generated grayscale images are used only for reference and do not need very high resolution, only the global generator G1 is used, with input and output sizes both set to 256*512. Training is performed on a GPU, with parameters updated by the Adam gradient descent method at learning rate lr = 0.0002, β1 = 0.5, β2 = 0.999. During the first 50 iterations the learning rate remains unchanged; it then decays linearly as the number of iterations increases. At 200 iterations the model converges to a stable state.
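The learning-rate schedule described above (constant for the first 50 iterations, then linear decay, stable by iteration 200) can be sketched in plain Python. Decaying to exactly zero at iteration 200 is an assumption for illustration; the patent does not state the decay endpoint:

```python
def lr_schedule(epoch, base_lr=0.0002, keep=50, total=200):
    """Learning rate at a given epoch: constant at base_lr for the first
    `keep` epochs, then linearly decaying to zero by epoch `total`
    (endpoint assumed, not specified in the patent)."""
    if epoch < keep:
        return base_lr
    return base_lr * max(0.0, 1.0 - (epoch - keep) / float(total - keep))
```

A schedule function like this could be plugged into an optimizer wrapper such as a per-epoch learning-rate setter alongside Adam with betas (0.5, 0.999).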
S104: according to the color–semantics correspondence and the semantic types, draw a semantic segmentation figure that matches the drawing intention.
The usual way of drawing is to start from a drawing intention: the author conceives the structure of the whole painting and then draws the precise content. Conceiving the painting structure is relatively easy, but different spatial structures mean that even identical semantic blocks carry different texture details, and the author can spend a great deal of time trying out and revising the specific texture, light-and-shade, and other details of a given part of the content. The purpose here is to assist painting with the grayscale images produced by the generation model obtained in the training stage: first, it can supply a light-and-shade scheme for the overall layout; second, it can supply texture inspiration for content details; third, even a blurry grayscale image lets the painter quickly see the approximate effect of the painting and make corresponding modifications, greatly shortening the time the author needs to draw.
The author expresses the drawing intention by drawing a semantic segmentation figure, using it to represent the layout. Through the arrangement of the color blocks, the shapes of the color blocks, and the semantic information they represent, a semantic segmentation figure can convey the spatial structure of the whole work.
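Such a layout could be mocked up programmatically as a color-coded image; the scene composition and the class colors below are hypothetical illustrations, not from the patent:

```python
import numpy as np

def draw_layout(height=256, width=512):
    """Sketch a painting intention as a color-coded semantic segmentation
    figure: sky on the top half, trees lower left, road lower right.
    Colors are hypothetical class colors."""
    seg = np.zeros((height, width, 3), dtype=np.uint8)
    seg[: height // 2] = (135, 206, 235)                # sky block
    seg[height // 2 :, : width // 2] = (34, 139, 34)    # tree block
    seg[height // 2 :, width // 2 :] = (128, 128, 128)  # road block
    return seg
```

The block arrangement and shapes encode the spatial structure that the conversion model then fills with texture and light-and-shade.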
S105: input the drawn semantic segmentation figure into the conversion model from semantic segmentation figure to grayscale image, which computes several grayscale images matching the semantic segmentation content.
Specifically, the generated grayscale images all follow the layout of the semantic segmentation figure in their spatial structure, while their detail content differs slightly because of the randomness of the model's generation. Generating several grayscale images provides more diverse detail content.
S106: choose a suitable grayscale image from the generated ones as the reference for detail content, render texture onto the detail content of the semantic segmentation figure, and enrich the content of the painting.
Specifically, a reference grayscale image is selected according to interest; the grayscale image contains light-and-shade and texture information, and the semantic segmentation figure is filled in and refined according to the light-and-shade and texture information the grayscale image presents.
A grayscale image has features such as texture and light-and-shade in its details; compared with the semantic segmentation figure it carries richer detail information. This information is learned by the model from the training data and has a certain reasonableness; it represents the texture and light-and-shade the computer has learned should appear there. From these grayscale images the author can find inspiration and enrich the details of each semantic block of the painting.
To refine the painting further, the drawing assistance method further comprises:
S107: repeat S104–S106; each iteration further enriches the content of the drawing, iterating until the whole painting is refined.
Specifically, the painting whose details have been enriched is repainted, or its semantic segmentation image is modified, and it is input again into the conversion model from semantic segmentation figure to grayscale image, which computes several grayscale images; a reference grayscale image is selected from among them and texture is rendered onto the refined painting, improving it further. That is, S104–S106 are repeated. By iterating this refinement continuously, the final painting is obtained.
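The iterate-and-refine loop S104–S106 can be summarized as a small driver function; `model`, `draw_segmentation`, and `pick_reference` are assumed interfaces standing in for the conversion model and the painter's actions, not names from the patent:

```python
def assist_painting(model, draw_segmentation, pick_reference,
                    n_samples=4, rounds=3):
    """Run `rounds` iterations of the S104-S106 cycle and return the
    reference grayscale image chosen in each round."""
    references = []
    for _ in range(rounds):
        seg = draw_segmentation(references)              # S104: (re)draw the layout
        grays = [model(seg) for _ in range(n_samples)]   # S105: generate grayscale maps
        references.append(pick_reference(grays))         # S106: keep one as reference
    return references
```

In practice `pick_reference` is the painter's manual choice and `draw_segmentation` incorporates the refinements made from earlier references.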
The above drawing assistance method converts a semantic segmentation figure into generated grayscale images, provides light-and-shade and texture references for the content details of a drawing, and makes up for the lack of detail guidance in existing drawing assistance. The generation process can be iterated, supporting the painter in continually finding inspiration in the grayscale images the model generates and reducing the time spent conceiving the details of a work; the method is efficient, convenient, and novel.
The specific embodiments described above explain the technical solutions and beneficial effects of the present invention in detail. It should be understood that the above is only the preferred embodiment of the invention and is not intended to limit it; any modification, supplement, or equivalent replacement made within the scope of the principles of the invention shall be included in the protection scope of the invention.

Claims (7)

1. A method for assisting drawing by referring to a grayscale image, comprising the following steps:
Training stage: obtaining training images and converting them into grayscale images; determining the semantic types and the color–semantics correspondence; performing semantic segmentation on the training images according to the color–semantics correspondence to obtain semantic segmentation images; taking the semantic segmentation images as input and the corresponding grayscale images as output, iteratively training a deep learning model capable of image-to-image translation, to obtain a conversion model from semantic segmentation figure to grayscale image;
Application stage: according to the color–semantics correspondence and the semantic types, drawing a semantic segmentation figure that matches the drawing intention, and inputting the drawn semantic segmentation figure into the conversion model from semantic segmentation figure to grayscale image, which computes several grayscale images; selecting a reference grayscale image from among them and rendering texture onto the semantic segmentation figure, enriching the content of the painting.
2. The method for assisting drawing by referring to a grayscale image according to claim 1, wherein no fewer than 5000 training images are acquired.
3. The method for assisting drawing by referring to a grayscale image according to claim 1, wherein the training images are converted into grayscale images using formula (1):
Gray(i, j) = 0.299*R(i, j) + 0.587*G(i, j) + 0.114*B(i, j)   (1)
where R(i, j) is the R-channel image, G(i, j) is the G-channel image, and B(i, j) is the B-channel image.
4. The method for assisting drawing by referring to a grayscale image according to claim 1, wherein performing semantic segmentation on the training images according to the color–semantics correspondence comprises: segmenting a training image according to the colors corresponding to the objects in the image, and labeling similar or identical objects in the image with the same color, thereby obtaining a semantic segmentation image.
5. The method for assisting drawing by referring to a grayscale image according to claim 1, wherein the pix2pixHD model is selected as the deep learning model and trained end to end to obtain the conversion model from semantic segmentation figure to grayscale image.
6. The method for assisting drawing by referring to a grayscale image according to claim 1, wherein a reference grayscale image is selected according to interest, the grayscale image containing light-and-shade and texture information, and the semantic segmentation figure is filled in and refined according to the light-and-shade and texture information the grayscale image presents.
7. The method for assisting drawing by referring to a grayscale image according to any one of claims 1 to 6, wherein the drawing assistance method further comprises: repainting the painting whose details have been enriched, or modifying the semantic segmentation image, and inputting it into the conversion model from semantic segmentation figure to grayscale image, which computes several grayscale images; selecting a reference grayscale image from among them and rendering texture onto the refined painting, improving the painting further.
CN201910149284.3A 2019-02-28 2019-02-28 Method for assisting drawing by referring to gray level diagram Active CN109849576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910149284.3A CN109849576B (en) 2019-02-28 2019-02-28 Method for assisting drawing by referring to gray level diagram


Publications (2)

Publication Number Publication Date
CN109849576A (en) 2019-06-07
CN109849576B (en) 2020-04-28

Family

ID=66899239


Country Status (1)

Country Link
CN (1) CN109849576B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2905724A2 * 2014-02-07 2015-08-12 Tata Consultancy Services Limited Object detection system and method
US10178982B2 * 2015-07-29 2019-01-15 Perkinelmer Health Sciences, Inc. System and methods for automated segmentation of individual skeletal bones in 3D anatomical images
CN107944457A * 2017-11-23 2018-04-20 浙江清华长三角研究院 Drawing object identification and extracting method under a kind of complex scene
CN107945244A * 2017-12-29 2018-04-20 哈尔滨拓思科技有限公司 A kind of simple picture generation method based on human face photo

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110312139A * 2019-06-18 2019-10-08 深圳前海达闼云端智能科技有限公司 The method and apparatus of image transmitting, storage medium
CN110322529A * 2019-07-12 2019-10-11 电子科技大学 A method of it is painted based on deep learning aided art
CN111043953A * 2019-10-17 2020-04-21 杭州电子科技大学 Two-dimensional phase unwrapping method based on deep learning semantic segmentation network
CN111043953B * 2019-10-17 2021-07-27 杭州电子科技大学 Two-dimensional phase unwrapping method based on deep learning semantic segmentation network
CN111325212A * 2020-02-18 2020-06-23 北京奇艺世纪科技有限公司 Model training method and device, electronic equipment and computer readable storage medium
CN112001839A * 2020-07-23 2020-11-27 浙江大学 Cross-domain image conversion method based on semantic feature transformation, computer device and storage medium
CN112001839B * 2020-07-23 2022-09-13 浙江大学 Cross-domain image conversion method based on semantic feature transformation, computer device and storage medium
CN113111906A * 2021-02-24 2021-07-13 浙江大学 Method for generating confrontation network model based on condition of single pair image training
CN113111906B * 2021-02-24 2022-07-12 浙江大学 Method for generating confrontation network model based on condition of single pair image training
CN116664773A * 2023-06-02 2023-08-29 北京元跃科技有限公司 Method and system for generating 3D model by multiple paintings based on deep learning
CN116664773B * 2023-06-02 2024-01-16 北京元跃科技有限公司 Method and system for generating 3D model by multiple paintings based on deep learning

Also Published As

Publication number Publication date
CN109849576B (en) 2020-04-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant