CN107103635A - Image layout color matching method

Image layout color matching method

Info

Publication number
CN107103635A
Authority
CN
China
Prior art keywords
template
image
color matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710165372.3A
Other languages
Chinese (zh)
Inventor
孟平
孟一平
唐帆
董未名
张晓鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN201710165372.3A
Publication of CN107103635A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to an image layout and color matching method comprising the following steps: Step 01, selecting an optimal text placement region for the text image in the original image, the optimal text placement region being a location with relatively low visual saliency; Step 02, selecting colors for the text in the text image; Step 03, outputting the original image containing the color-matched text image. The present invention makes it easy to produce a satisfactory picture while improving its aesthetic quality.

Description

Image layout color matching method
Technical field
The invention belongs to the technical field of digital image processing and relates to the mixed composition of text and images in computer graphics and digital image processing, and in particular to an image layout and color matching method.
Background technology
As users' demands for personalized image sharing keep rising, fusing text with images has become a new mode of dissemination. Automatically adding aesthetically pleasing watermark text to user photos has therefore become a new direction in image processing: the automated design of visual media.
The mixed composition of text and images is one of the problems that automatic typesetting technology urgently needs to solve. Previous work has mainly focused on the automatic typesetting of text blocks and images over solid backgrounds. This kind of typesetting problem is based on the grid concept in design. Some earlier work addressed the typesetting problem of adding extra elements to uniform background regions of an image. However, these works handle the aesthetic typesetting of a single text overlaid on a picture with a uniform background by building computational models.
The content of the invention
In order to solve the above problems in the prior art, the present invention proposes an image layout and color matching method that makes it easy to produce a satisfactory picture while improving its aesthetic quality.
The method comprises the following steps:
Step 01, selecting an optimal text placement region for the text image in the original image, the optimal text placement region being a location with relatively low visual saliency;
Step 02, selecting colors for the text in the text image;
Step 03, outputting the original image containing the color-matched text image.
Preferably, when selecting the optimal text placement region, visual saliency is calculated as:
HOF_sal(p) = min(1, H_sal(p) + O_obj(p) + F_face(p))
where HOF_sal(p) is the visual saliency, p is any pixel in the original image, H_sal(p) is the hierarchical saliency map value at each pixel, O_obj(p) is the object-region map value at each pixel, F_face(p) is the face-region map value at each pixel, and H_sal(p), O_obj(p) and F_face(p) all take values between 0 and 1.
Preferably, when selecting the optimal text placement region for the text image, the placement region is calculated as:
R* = arg max_R Σ_{i=x, j=y}^{x+w, y+h} (1 − HOF_sal(i, j) + T_sp(i, j))
where R* is the optimal text placement region, (x, y) is the coordinate of the upper-left corner of the rectangular region, w is the width of the rectangle, h is the height of the rectangle, and T_sp(i, j) is the spatial importance at pixel (i, j).
Preferably, the hue histogram weighted by the visual saliency is computed such that M* is the hue histogram, the hue histogram M* being the saliency-weighted distribution in hue space of the saturation and visual saliency values of all pixels p of the original image; H denotes the hue component, and ⊙ denotes the product of corresponding elements of two matrices.
Preferably, after normalization, the hue histogram is:
M = { M_i | M_i = M_i* / Σ_i M_i* }
where M is the hue histogram after normalization.
Preferably, step 02 is specifically:
selecting the template closest to the color distribution of the original image, harmonizing the original image according to the template, and selecting colors for the text accordingly.
Preferably, when matching templates, the harmony loss of the original image under each template is calculated as:
F(I, T) = Σ_{p∈I} ‖H(p) − T_sp(p)‖ · HOF_sal(p)
where F denotes the harmony loss, I denotes the original image, T denotes the template, and H denotes the hue component; ‖·‖ denotes the minimum hue-wheel distance by which a given hue must be shifted to fall within the sectors of the given template, and T_sp(p) denotes the spatial importance of pixel p.
Preferably, when matching templates, the harmony loss of the original image under each rotated template is calculated as:
F_m(I, T_m) = min_α F(I, T_m(α))
where F denotes the harmony loss, I denotes the original image, T_m denotes a template with m ∈ {X, Y, V, T}, and α denotes an arbitrary rotation angle of the template.
Preferably, the selection strategy of the template is:
B(I) = (m, α), with (m, α) = arg min_m F_m(I).
Preferably, the template selection strategy includes splitting and merging, specifically:
Splitting: a T template is split into an X template, a V template is split into a Y template, and X and Y templates are not split;
Merging: a T template is merged into a V template, an X template is merged into a Y template, and V and Y templates are not merged.
Compared with the prior art, the present invention has at least the following advantage:
The image layout and color matching method of the present invention makes it easy to produce a satisfactory picture while improving its aesthetic quality.
Brief description of the drawings
Fig. 1 is a flow chart of the image layout and color matching method provided by the present invention.
Embodiment
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It will be apparent to those skilled in the art that these embodiments are only used to explain the technical principles of the invention and are not intended to limit its scope.
The method of the present invention is broadly divided into two parts: a text region selection strategy and a text color selection strategy. When determining the text placement based on visual saliency theory and aesthetic composition, both the picture content (to avoid occlusion by the text) and composition skills must be considered. Given an image, the region best suited for placing text of a specified size is defined as the optimal text placement region. This region should disturb the expression of the original image content as little as possible and satisfy design composition principles as far as possible. When determining the text color based on visual contrast and color harmonization principles, the colors of the original image should remain as harmonious as possible while the text color contrasts with the original image as strongly as possible.
The specific algorithm for each step is described in detail below.
The invention discloses an image layout and color matching method which, as shown in Fig. 1, comprises the following steps:
Step 01, selecting an optimal text placement region for the text image in the original image.
The optimal text placement region is a location with relatively low visual saliency. To ensure that the text disturbs the expression of the original image content as little as possible, the text region should not be placed over regions that viewers of the original image are likely to be interested in, so as to avoid visual confusion; if such confusion is unavoidable, it should be reduced as much as possible, i.e., the text should be placed in regions of relatively low visual saliency. Conventional visual saliency mostly considers the contrast between a pixel and its surrounding background in terms of color, brightness, and orientation. In addition, regions such as "faces", "human bodies" or other familiar objects in the image should receive particular attention, because placing text over these regions is likely to create "visual noise" and impair the effective transmission of information.
When selecting the optimal text placement region, visual saliency is calculated as:
HOF_sal(p) = min(1, H_sal(p) + O_obj(p) + F_face(p))
where HOF_sal(p) is the visual saliency, p is any pixel in the original image, H_sal(p) is the hierarchical saliency map value at each pixel, O_obj(p) is the object-region map value at each pixel, and F_face(p) is the face-region map value at each pixel; H_sal(p), O_obj(p) and F_face(p) all take values between 0 and 1. When the image contains no common object or no face region, O_obj(p) and F_face(p) are 0; when the three terms sum to more than 1, the pixel is considered strongly salient and the value is treated as 1, so HOF_sal(p) also ranges from 0 to 1. The specific meaning and computation of each map are as follows (a small code sketch of the combination follows the list below):
1) Hierarchical saliency map: the present invention uses a hierarchical analysis method to reduce the influence that small-scale, high-contrast regions have on saliency detection. The method divides the image into 3 layers; each layer is processed into over-segmented image blocks of different sizes, forming a tree structure, and the saliency map is then obtained by hierarchical inference. Each element of the hierarchical saliency map takes a value between 0 and 1; the larger the value, the more salient the region.
2) Object region map: this map indicates the common objects present in the image. Common object region information is extracted with Faster R-CNN, and the object region map is then obtained using GrabCut. The processing flow is: an image is input; Faster R-CNN produces the bounding boxes of the object regions, where the region inside a box may be foreground or background and the region outside is non-foreground; the boxes are then fed to GrabCut as input to obtain the object region map. Each element of the object region map takes a value between 0 and 1; the larger the value, the more likely the region contains an object.
3) Face region map: face detection is performed; if a face region is detected, the face map is constructed from the face region using GrabCut, with a flow similar to that of the object region map above. Each element of the face region map takes a value between 0 and 1; the larger the value, the more likely the region contains a face.
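As a minimal sketch of this combination (not the patent's reference implementation), assuming the three maps have already been computed and normalized to [0, 1]:

```python
import numpy as np

def combine_saliency(h_sal, o_obj, f_face):
    """Combine the hierarchical saliency map, object region map and face
    region map into the overall saliency HOF_sal, clamped to [0, 1]."""
    return np.minimum(1.0, h_sal + o_obj + f_face)
```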
When automatically computing the optimal position for overlaying text, not only should the expression of the original image content be disturbed as little as possible, but design composition principles should also be satisfied as far as possible. The present invention uses the spatial position metric template proposed by Khan. The template is based on the rule of thirds, which regards points on the horizontal and vertical third lines as important; in addition, the image center and the upper and lower third positions are given special consideration. Brighter areas in the template indicate more important positions, while darker areas indicate less important positions. During computation, the template is scaled to the same size as the original image and each element is scaled to the range 0 to 1; T_sp(i, j) denotes the spatial importance at point (i, j).
In summary, for any original image I_{m×n}, the optimal text placement region R* is
R* = arg max_R Σ_{i=x, j=y}^{x+w, y+h} (1 − HOF_sal(i, j) + T_sp(i, j)),
where (x, y) is the coordinate of the upper-left corner of the rectangular region, w is the width of the rectangle, and h is the height of the rectangle.
The above problem reduces to finding the maximum-sum submatrix of a non-negative 2-dimensional matrix, which can be solved quickly with a dynamic programming algorithm.
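For a fixed text box size w×h, one reasonable realization (a sketch under that assumption, not the patent's exact algorithm) precomputes a summed-area table so that every candidate window is scored in constant time:

```python
import numpy as np

def best_text_region(hof_sal, t_sp, w, h):
    """Return the top-left corner (x, y) of the w-by-h window that maximizes
    the sum of (1 - HOF_sal + T_sp), using a summed-area table (x = column,
    y = row, matching the upper-left-corner convention above)."""
    score = 1.0 - hof_sal + t_sp                     # non-negative if both maps lie in [0, 1]
    sat = np.zeros((score.shape[0] + 1, score.shape[1] + 1))
    sat[1:, 1:] = np.cumsum(np.cumsum(score, axis=0), axis=1)
    best, best_xy = -np.inf, (0, 0)
    for y in range(score.shape[0] - h + 1):
        for x in range(score.shape[1] - w + 1):
            s = sat[y + h, x + w] - sat[y, x + w] - sat[y + h, x] + sat[y, x]
            if s > best:
                best, best_xy = s, (x, y)
    return best_xy
```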
For the text color selection strategy, the present invention first computes the HSV hue histogram of the image weighted by the saliency described above. The specific steps are as follows:
The hue histogram M* represents the saliency-weighted distribution in hue space of the saturation and visual saliency values of all pixels p of the original image I_{m×n}. In its defining formula, H denotes the hue component and ⊙ denotes the product of corresponding elements of two matrices.
After normalization, the hue histogram is
M = { M_i | M_i = M_i* / Σ_i M_i* },
where M is the hue histogram after normalization.
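A plausible sketch of such a weighted hue histogram follows; since the defining formula for M* is not reproduced above, the exact weighting (saturation multiplied by saliency per pixel) is an assumption consistent with the description:

```python
import numpy as np

def weighted_hue_histogram(hue, sat, hof_sal, bins=360):
    """Accumulate saturation * saliency into each pixel's hue bin (M*),
    then normalize so the histogram sums to 1 (M)."""
    weights = (sat * hof_sal).ravel()                       # element-wise product of the two maps
    m_star, _ = np.histogram(hue.ravel(), bins=bins,
                             range=(0.0, 360.0), weights=weights)
    return m_star / max(m_star.sum(), 1e-12)                # normalized hue histogram M
```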
Step 02, selecting colors for the text in the text image.
Specifically, the template closest to the color distribution of the original image is selected, the original image is harmonized according to this template, and colors are selected for the text accordingly.
In other words, the present invention selects the template closest to the color distribution of the original image, uses the matched template to harmonize the original image, and on this basis selects colors for the text. The steps are as follows:
Following the definition of image harmony and combining the saliency proposed above, the harmony loss of an original image I_{m×n} under a template T is defined as:
F(I, T) = Σ_{p∈I} ‖H(p) − T_sp(p)‖ · HOF_sal(p)
where F denotes the harmony loss, I denotes the original image, T denotes the template, and H denotes the hue component; ‖·‖ denotes the minimum hue-wheel distance by which a given hue must be shifted to fall within the sectors of the given template, and T_sp(p) denotes the spatial importance of pixel p.
When matching templates, the harmony loss of the original image under each rotated template is calculated as:
F_m(I, T_m) = min_α F(I, T_m(α))
where F denotes the harmony loss, I denotes the original image, T_m denotes a template with m ∈ {X, Y, V, T}, and α denotes an arbitrary rotation angle of the template.
Given the harmony of image I under each template, an obvious template selection strategy is:
B(I) = (m, α), with (m, α) = arg min_m F_m(I)
That is, the harmony loss of the image under each template is computed in turn, and the template with the minimum loss is chosen, as sketched below.
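As an illustrative sketch of this selection (the sector widths below and the sampling of rotation angles are assumptions, not values given in the text):

```python
import numpy as np

# Assumed sector widths in degrees for the X, Y, V, T harmonic templates;
# each template is a list of (sector center offset, sector width) on the hue wheel.
TEMPLATES = {
    "V": [(0.0, 93.6)],
    "T": [(0.0, 180.0)],
    "Y": [(0.0, 93.6), (180.0, 18.0)],
    "X": [(0.0, 93.6), (180.0, 93.6)],
}

def hue_dist_to_template(hue, template, alpha):
    """Minimum hue-wheel distance (degrees) from each hue to the sectors of
    the template rotated by alpha; zero for hues already inside a sector."""
    dists = []
    for center, width in template:
        d = np.abs((hue - (center + alpha) + 180.0) % 360.0 - 180.0)
        dists.append(np.maximum(d - width / 2.0, 0.0))
    return np.min(dists, axis=0)

def harmony_loss(hue, hof_sal, template, alpha):
    """Saliency-weighted harmony loss F(I, T(alpha))."""
    return float(np.sum(hue_dist_to_template(hue, template, alpha) * hof_sal))

def best_template(hue, hof_sal, angles=range(0, 360, 5)):
    """Return the (m, alpha) pair that minimizes the harmony loss."""
    _, m, alpha = min((harmony_loss(hue, hof_sal, t, a), m, a)
                      for m, t in TEMPLATES.items() for a in angles)
    return m, alpha
```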
The present invention proposes an adaptive template matching strategy based on merging and splitting, where:
the splitting rule is: a T template is split into an X template, a V template is split into a Y template, and X and Y templates are not split;
the merging rule is: a T template is merged into a V template, an X template is merged into a Y template, and V and Y templates are not merged.
The flow of the adaptive template matching algorithm is as follows (a Python sketch follows the pseudocode):
Initialize the template to be matched as the T template;
WHILE the template to be matched changes:
match the original image I with the template to be matched,
compute the mean Ave and variance Var_p of the histogram values M_i within the template region,
find the peak concentration points according to Ave + 3·Var_p;
IF the number of peak points is greater than 2,
THEN
split the template according to the splitting rule and update the template to be matched;
ELSE
IF no peak concentration point exists and more than 70% of the points within the template region fall below the mean,
THEN
merge the template according to the merging rule and update the template to be matched;
ELSE the matching terminates.
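A rough Python sketch of this adaptive loop follows; the helper template_mask_fn (which marks the histogram bins covered by a template) and the reading of the 70% threshold are assumptions made for illustration:

```python
import numpy as np

SPLIT = {"T": "X", "V": "Y"}     # splitting rule: T -> X, V -> Y
MERGE = {"T": "V", "X": "Y"}     # merging rule:   T -> V, X -> Y

def adapt_template(hist_m, template_mask_fn, m="T"):
    """Iteratively split or merge the harmonic template based on the
    statistics of the normalized hue histogram inside the template's sectors."""
    while True:
        inside = hist_m[template_mask_fn(m)]                  # histogram bins covered by template m
        ave, var = inside.mean(), inside.var()
        peaks = int(np.count_nonzero(inside > ave + 3.0 * var))
        if peaks > 2 and m in SPLIT:
            m = SPLIT[m]                                      # split the template
        elif peaks == 0 and np.mean(inside < ave) > 0.7 and m in MERGE:
            m = MERGE[m]                                      # merge the template
        else:
            return m                                          # matching terminates
```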
Finally, the color matching strategy here considers the readability of the text in addition to the aesthetics of the colors. If the text color is too similar to the background color of the image, reading the text is impaired. Based on the principle of visual contrast, a contrasting-color strategy is therefore used, on top of the harmonization, to select colors for the text.
For the templates based on the complementary-hue principle (templates X and Y), first the template sector containing the text region is determined, then the angular bisector of that sector is computed, and the color corresponding to the contrasting hue of the bisector is taken as the text color.
For the templates based on the similar-hue principle (templates V and T), the angular bisector of the template sector region is computed directly, and the color corresponding to the contrasting hue of the bisector is the text color.
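Reading the "contrasting hue" as the hue diametrically opposite the sector bisector on the hue wheel (an assumption), a minimal sketch is:

```python
def text_hue(sector_center, alpha):
    """Return a text hue opposite the bisector of the rotated template sector.
    The bisector of a sector is its rotated center; the contrasting hue lies
    180 degrees away on the hue wheel."""
    bisector = (sector_center + alpha) % 360.0
    return (bisector + 180.0) % 360.0
```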
Step 03, outputting the original image containing the color-matched text image.
The characteristics and innovation of the method of the present invention are that it proposes a fully automatic method for typesetting and coloring photo watermarks, providing the user with design recommendations for text placement and color matching and guiding the user, with reference to design principles, through the combined layout and color design of pictures and text. The layout of the text block is optimized based on the visually salient regions of the image and relevant design theory. On this basis, the color harmonization principle is used to improve the aesthetic quality of the picture, and the visual contrast principle is used to select colors for the text.
The technical solution of the present invention has thus been described with reference to the preferred embodiments shown in the drawings. However, those skilled in the art will readily understand that the scope of protection of the present invention is not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions fall within the scope of protection of the present invention.

Claims (10)

1. An image layout and color matching method, characterized in that it comprises the following steps:
Step 01, selecting an optimal text placement region for the text image in the original image, the optimal text placement region being a location with relatively low visual saliency;
Step 02, selecting colors for the text in the text image;
Step 03, outputting the original image containing the color-matched text image.
2. The image layout and color matching method according to claim 1, characterized in that, when selecting the optimal text placement region, visual saliency is calculated as:
HOF_sal(p) = min(1, H_sal(p) + O_obj(p) + F_face(p))
where HOF_sal(p) is the visual saliency, p is any pixel in the original image, H_sal(p) is the hierarchical saliency map value at each pixel, O_obj(p) is the object-region map value at each pixel, F_face(p) is the face-region map value at each pixel, and H_sal(p), O_obj(p) and F_face(p) all take values between 0 and 1.
3. The image layout and color matching method according to claim 2, characterized in that, when selecting the optimal text placement region for the text image, the placement region is calculated as:
R* = arg max_R Σ_{i=x, j=y}^{x+w, y+h} (1 − HOF_sal(i, j) + T_sp(i, j))
where R* is the optimal text placement region, (x, y) is the coordinate of the upper-left corner of the rectangular region, w is the width of the rectangle, h is the height of the rectangle, and T_sp(i, j) is the spatial importance at pixel (i, j).
4. The image layout and color matching method according to claim 2, characterized in that the hue histogram weighted by the visual saliency is computed such that M* is the hue histogram, the hue histogram M* being the saliency-weighted distribution in hue space of the saturation and visual saliency values of all pixels p of the original image, where H denotes the hue component and ⊙ denotes the product of corresponding elements of two matrices.
5. The image layout and color matching method according to claim 4, characterized in that, after normalization, the hue histogram is:
M = { M_i | M_i = M_i* / Σ_i M_i* }
where M is the hue histogram after normalization.
6. The image layout and color matching method according to claim 2, characterized in that step 02 is specifically:
selecting the template closest to the color distribution of the original image, harmonizing the original image according to the template, and selecting colors for the text accordingly.
7. The image layout and color matching method according to claim 6, characterized in that, when matching templates, the harmony loss of the original image under each template is calculated as:
F(I, T) = Σ_{p∈I} ‖H(p) − T_sp(p)‖ · HOF_sal(p)
where F denotes the harmony loss, I denotes the original image, T denotes the template, and H denotes the hue component; ‖·‖ denotes the minimum hue-wheel distance by which a given hue must be shifted to fall within the sectors of the given template, and T_sp(p) denotes the spatial importance of pixel p.
8. The image layout and color matching method according to claim 6, characterized in that, when matching templates, the harmony loss of the original image under each rotated template is calculated as:
F_m(I, T_m) = min_α F(I, T_m(α))
where F denotes the harmony loss, I denotes the original image, T_m denotes a template with m ∈ {X, Y, V, T}, and α denotes an arbitrary rotation angle of the template.
9. The image layout and color matching method according to claim 7 or 8, characterized in that the selection strategy of the template is:
B(I) = (m, α)
(m, α) = arg min_m F_m(I)
10. The image layout and color matching method according to claim 9, characterized in that the selection strategy of the template includes splitting and merging, specifically:
Splitting: a T template is split into an X template, a V template is split into a Y template, and X and Y templates are not split;
Merging: a T template is merged into a V template, an X template is merged into a Y template, and V and Y templates are not merged.
CN201710165372.3A 2017-03-20 2017-03-20 Image layout color matching method Pending CN107103635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710165372.3A CN107103635A (en) 2017-03-20 2017-03-20 Image layout color matching method


Publications (1)

Publication Number Publication Date
CN107103635A true CN107103635A (en) 2017-08-29

Family

ID=59675356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710165372.3A Pending CN107103635A (en) 2017-03-20 2017-03-20 Image layout color matching method

Country Status (1)

Country Link
CN (1) CN107103635A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253759A (en) * 2005-05-30 2008-08-27 富士胶片株式会社 Album creating apparatus, album creating method and computer readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孟一平 et al.: "照片水印自动排版与配色" (Automatic typesetting and color matching of photo watermarks), 《中国图象图形学报》 (Journal of Image and Graphics) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009712A (en) * 2019-03-01 2019-07-12 华为技术有限公司 A kind of picture and text composition method and its relevant apparatus
WO2020177584A1 (en) * 2019-03-01 2020-09-10 华为技术有限公司 Graphic typesetting method and related device
EP3910598A4 (en) * 2019-03-01 2022-03-30 Huawei Technologies Co., Ltd. Graphic typesetting method and related device
US11790584B2 (en) 2019-03-01 2023-10-17 Huawei Technologies Co., Ltd. Image and text typesetting method and related apparatus thereof
CN110264545A (en) * 2019-06-19 2019-09-20 北京字节跳动网络技术有限公司 Picture Generation Method, device, electronic equipment and storage medium
WO2020253766A1 (en) * 2019-06-19 2020-12-24 北京字节跳动网络技术有限公司 Picture generation method and apparatus, electronic device, and storage medium
CN110706310A (en) * 2019-08-23 2020-01-17 华为技术有限公司 Image-text fusion method and device and electronic equipment
WO2021036715A1 (en) * 2019-08-23 2021-03-04 华为技术有限公司 Image-text fusion method and apparatus, and electronic device
CN110706310B (en) * 2019-08-23 2021-10-22 华为技术有限公司 Image-text fusion method and device and electronic equipment
CN114491092A (en) * 2022-01-26 2022-05-13 深圳市前海手绘科技文化有限公司 Method and system for recommending materials according to document content and color matching
CN114491092B (en) * 2022-01-26 2023-02-10 深圳市前海手绘科技文化有限公司 Method and system for recommending materials according to document contents and color matching
CN117669493A (en) * 2023-12-08 2024-03-08 安徽省医学情报研究所 Intelligent image-text typesetting method and system based on significance detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20170829