CN113763486A - Dominant hue extraction method, device, electronic device and storage medium - Google Patents


Info

Publication number
CN113763486A
CN113763486A (application CN202010485775.8A)
Authority
CN
China
Prior art keywords
image
color value
pixel point
pixel points
preset number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010485775.8A
Other languages
Chinese (zh)
Other versions
CN113763486B (en)
Inventor
杨鼎超
刘易周
汪洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010485775.8A priority Critical patent/CN113763486B/en
Priority to PCT/CN2020/127558 priority patent/WO2021243955A1/en
Publication of CN113763486A publication Critical patent/CN113763486A/en
Application granted granted Critical
Publication of CN113763486B publication Critical patent/CN113763486B/en
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to a dominant hue extraction method and apparatus, an electronic device, and a storage medium, and belongs to the field of computer technology. The method performs the following operation for each first pixel point simultaneously: obtaining the mixed color value of any reference pixel point in a first image according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, which improves the efficiency of processing the image. The first image is then processed according to the mixed color value of each first pixel point to generate a second image, in which the color values of the pixel points are more uniform. The color value of at least one pixel point is extracted from the second image and determined as the dominant hue of the first image. The extracted dominant hue thus better matches the characteristics of the first image, which improves the accuracy of dominant hue extraction, saves processing time, and improves the extraction efficiency of the dominant hue.

Description

Dominant hue extraction method, device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for dominant hue extraction, an electronic device, and a storage medium.
Background
With the rapid development of computer technology, image processing methods keep increasing, and a common one is to extract the dominant hue of an image. Because the colors of the pixel points in an image differ, how to extract the dominant hue of an image has become an urgent problem to solve.
In the related art, a CPU obtains a target image, traverses each pixel point in the target image, and extracts the color characteristic value of each pixel point; the color characteristic value shared by the largest number of pixel points is then determined, and the corresponding color is taken as the dominant hue of the target image. However, this method must traverse every pixel point in the target image, so the processing time is long and the efficiency of dominant hue extraction is low.
Disclosure of Invention
The present disclosure provides a dominant hue extraction method, apparatus, electronic device, and storage medium, which can make the extracted dominant hue more consistent with the features of a first image, improve the accuracy of dominant hue extraction, save processing time, and improve the extraction efficiency of dominant hue.
According to a first aspect of embodiments of the present disclosure, there is provided a method of dominant hue extraction, the method comprising:
acquiring a first image, wherein the first image comprises a plurality of first pixel points;
simultaneously executing the following operation for each first pixel point: acquiring the mixed color value of a reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
processing the first image according to the color value of each mixed first pixel point in the first image to generate a second image;
and extracting the color value of at least one pixel point from the second image, and determining the color value of the at least one pixel point as the dominant hue of the first image.
In one possible implementation, the simultaneously executing the following operation for each first pixel point: acquiring the mixed color value of the reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image, comprises:
simultaneously executing the following operations for each first pixel point: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point and the color value of the reference pixel point, and taking the mixed color value of the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
and taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the mixed color value of the reference pixel point.
In another possible implementation, the acquiring the first image comprises:
dividing the third image into a first preset number of image areas, wherein the size of each image area is the same;
and simultaneously performing down-sampling processing on the first preset number of image areas to obtain a first preset number of first pixel points, and forming the first image from the first preset number of first pixel points.
In another possible implementation manner, the performing down-sampling processing on the first preset number of image regions at the same time to obtain the first preset number of first pixel points, and configuring the first preset number of first pixel points into the first image includes:
determining the color value of each image area according to the color values of the pixel points included in each image area, as the color value of the first pixel point corresponding to each image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation manner, the determining, according to the color value of the pixel point included in each image region, the color value of each image region as the color value of the first pixel point corresponding to each image region includes:
simultaneously obtaining the average value of the color values of the pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation manner, the dividing the third image into a first preset number of image areas includes:
determining a first size of the divided image area according to the size of the third image and the first preset number;
and dividing an image area satisfying the first size from the third image according to the first size.
In another possible implementation manner, the extracting a color value of at least one pixel point from the second image includes:
dividing the second image into a second preset number of image areas, wherein the sizes of the second preset number of image areas are the same;
extracting any pixel point from each image area in the second preset number of image areas to obtain the second preset number of pixel points;
and extracting the color values of the second preset number of pixel points.
In another possible implementation manner, the dividing the second image into a second preset number of image areas includes:
determining a second size of the divided image area according to the size of the second image and the second preset number;
and dividing an image area satisfying the second size from the second image according to the second size.
In another possible implementation manner, the determining the color value of the at least one pixel point as the dominant hue of the first image includes:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
According to a second aspect of the embodiments of the present disclosure, there is provided a dominant hue extraction apparatus, the apparatus including:
the image acquisition unit is used for acquiring a first image, and the first image comprises a plurality of first pixel points;
a color value obtaining unit, configured to perform the following operation for each first pixel point simultaneously: acquiring the mixed color value of a reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
the generating unit is used for processing the first image according to the color value of each mixed first pixel point in the first image to generate a second image;
and the extraction unit is used for extracting the color value of at least one pixel point from the second image and determining the color value of the at least one pixel point as the dominant hue of the first image.
In another possible implementation manner, the color value obtaining unit includes:
a mixing subunit, configured to perform the following operations for each first pixel point simultaneously: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point and the color value of the reference pixel point, and taking the mixed color value of the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
and the determining subunit is used for taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the color value of the reference pixel point after mixing.
In another possible implementation manner, the image acquisition unit comprises:
the first dividing subunit is used for dividing the third image into a first preset number of image areas, each of which has the same size;
and the processing subunit is configured to perform downsampling processing on the first preset number of image regions simultaneously to obtain a first preset number of first pixel points, and configure the first preset number of first pixel points into the first image.
In another possible implementation manner, the processing subunit is configured to:
determining the color value of each image area according to the color values of the pixel points included in each image area, as the color value of the first pixel point corresponding to each image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation manner, the processing subunit is configured to:
simultaneously obtaining the average value of the color values of the pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation manner, the first dividing subunit is configured to:
determining a first size of the divided image area according to the size of the third image and the first preset number;
and dividing an image area satisfying the first size from the third image according to the first size.
In another possible implementation manner, the extraction unit includes:
the second dividing subunit is used for dividing the second image into a second preset number of image areas, and the sizes of the second preset number of image areas are the same;
a pixel point extracting subunit, configured to extract any pixel point from each of the second preset number of image regions to obtain a second preset number of pixel points;
and the color value extracting subunit is used for extracting the color values of the second preset number of pixel points.
In another possible implementation manner, the second dividing subunit is configured to:
determining a second size of the divided image area according to the size of the second image and the second preset number;
and dividing an image area satisfying the second size from the second image according to the second size.
In another possible implementation manner, the second preset number of image areas have corresponding image areas in the first image, and the extracting unit is configured to:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
one or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the dominant hue extraction method as described in the first aspect.
According to a fourth aspect provided by embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the dominant hue extraction method according to the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, wherein the instructions of the computer program product, when executed by a processor of an electronic device, enable the electronic device to perform the dominant hue extraction method as described in the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method, the device, the electronic device and the storage medium provided by the embodiment of the application obtain a first image, wherein the first image comprises a plurality of first pixel points, and the following operations are simultaneously executed for each first pixel point: acquiring the color value of any reference pixel point in the first image after mixing according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, the efficiency of processing the image can be improved, the first image is processed according to the color value of each mixed first pixel point in the first image to generate a second image, the color value of each pixel point in the second image is more uniform, the color value of at least one pixel point is extracted from the second image, the color value of at least one pixel point is determined as the dominant hue of the first image, the color values of the pixel points of the first image are mixed, the extracted dominant hue can be more consistent with the characteristics of the first image, the accuracy of dominant hue extraction is improved, the processing time is saved, and the extraction efficiency of the dominant hue is improved.
Moreover, by obtaining the average value of the color values of the pixels in each image area, the color values of the first pixels corresponding to the determined image areas are more uniform, and the accuracy of the color values of the determined first pixels can be improved. And the first image is obtained by performing downsampling processing on the first preset number of image areas of the third image, so that the efficiency of extracting the dominant hue of the image can be improved, and the data volume of the processing is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of dominant hue extraction according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of dominant hue extraction according to an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a division of a third image according to an example embodiment.
Fig. 4 is a diagram illustrating an acquisition of an average value of color feature values of a third image according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a down-sampling process according to an example embodiment.
Fig. 6 is a schematic diagram illustrating one type of interpolation process according to an exemplary embodiment.
FIG. 7 is a schematic diagram illustrating one type of hue extraction according to an exemplary embodiment.
Fig. 8 is a schematic structural diagram illustrating a dominant hue extraction apparatus according to an exemplary embodiment.
Fig. 9 is a schematic structural diagram showing another dominant hue extraction apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating a terminal according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with aspects of the present disclosure as recited in the appended claims.
The embodiment of the present disclosure provides a dominant hue extraction method. A first image is obtained, and the following operation is performed for each first pixel point in the first image simultaneously: the mixed color value of a reference pixel point is obtained according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point. The first image is then processed according to the mixed color value of each first pixel point to generate a second image, the color value of at least one pixel point is extracted from the second image, and the color value of the at least one pixel point is determined as the dominant hue of the first image. The method can be applied in various scenes.
For example, the method provided by the embodiment of the present disclosure is applied in an image classification scene, and when a terminal needs to classify a plurality of images, the method provided by the embodiment of the present disclosure is adopted, so that the dominant hue of each image can be obtained, and then the plurality of images are classified according to the dominant hue of each image.
Or, for example, the method provided by the embodiment of the present disclosure is applied in an image search scene, and when a terminal needs to search for an image, the method provided by the embodiment of the present disclosure is adopted, so that the dominant hue of each image can be obtained, an image with the same dominant hue as the dominant hue to be searched is searched, and a search result is obtained.
The dominant hue extraction method provided by the embodiment of the disclosure is applied to the terminal. The terminal can be various terminals such as a mobile phone, a tablet computer, a computer and the like.
In one possible implementation, the terminal includes a GPU (Graphics Processing Unit), a processor in the terminal for performing drawing operations; the GPU can process data in parallel to increase the rate of processing data.
Fig. 1 is a flowchart illustrating a method of dominant hue extraction according to an exemplary embodiment, referring to fig. 1, the method comprising:
in step 101, a first image is obtained, where the first image includes a plurality of first pixel points.
In step 102, the following operation is performed for each first pixel point simultaneously: the mixed color value of a reference pixel point is acquired according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image.
In step 103, the first image is processed according to the color value of each mixed first pixel in the first image, and a second image is generated.
In step 104, a color value of at least one pixel point is extracted from the second image, and the color value of the at least one pixel point is determined as a dominant hue of the first image.
The method provided by the embodiment of the present disclosure obtains a first image comprising a plurality of first pixel points and performs the following operation for each first pixel point simultaneously: obtaining the mixed color value of any reference pixel point in the first image according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, which improves the efficiency of processing the image. The first image is then processed according to the mixed color value of each first pixel point in the first image to generate a second image, in which the color values of the pixel points are more uniform. The color value of at least one pixel point is extracted from the second image and determined as the dominant hue of the first image. Because the color values of the pixel points of the first image are mixed, the extracted dominant hue better matches the characteristics of the first image, which improves the accuracy of dominant hue extraction, saves processing time, and improves the extraction efficiency of the dominant hue.
In one possible implementation, the performing, for each first pixel point simultaneously, of obtaining the mixed color value of the reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, where the reference pixel point is any pixel point in the first image, includes the following steps:
simultaneously executing the following operations for each first pixel point: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point and the color value of the reference pixel point, and taking the mixed color value of the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
and taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the mixed color value of the reference pixel point.
In another possible implementation, acquiring the first image includes:
dividing the third image into a first preset number of image areas, wherein the size of each image area is the same;
and simultaneously, performing down-sampling processing on the image areas with the first preset number to obtain first pixel points with the first preset number, and forming the first image by the first pixel points with the first preset number.
In another possible implementation manner, down-sampling the image regions of the first preset number at the same time to obtain first pixel points of the first preset number, and forming the first pixel points of the first preset number into a first image includes:
simultaneously, determining the color value of each image area according to the color values of the pixel points included in each image area, as the color value of the first pixel point corresponding to each image area;
and creating a first image containing a plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation manner, determining a color value of each image region according to a color value of a pixel point included in each image region, as a color value of a first pixel point corresponding to each image region, includes:
simultaneously obtaining the average value of the color values of the pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation, dividing the third image into a first preset number of image areas includes:
determining a first size of the divided image area according to the size of the third image and the first preset number;
an image area satisfying the first size is divided from the third image according to the first size.
In another possible implementation manner, the extracting a color value of at least one pixel point from the second image includes:
dividing the second image into a second preset number of image areas, wherein the sizes of the second preset number of image areas are the same;
extracting any pixel point from each image area in a second preset number of image areas to obtain a second preset number of pixel points;
and extracting color values of the second preset number of pixel points.
In another possible implementation, dividing the second image into a second preset number of image areas includes:
determining a second size of the divided image area according to the size of the second image and a second preset number;
and dividing the image area satisfying the second size from the second image according to the second size.
In another possible implementation manner, the second preset number of image regions have corresponding image regions in the first image, and the determining the color value of the at least one pixel point as the dominant hue of the first image includes:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
Fig. 2 is a flowchart illustrating a method of extracting a dominant hue, referring to fig. 2, applied in a terminal according to an exemplary embodiment, the method including:
in step 201, the third image is divided into a first preset number of image areas, each having the same size.
The third image is any image from which the dominant hue needs to be extracted. It may be a landscape image, a person image, or another type of image. In addition, the third image may be obtained by shooting, by browsing images posted by other users on a social platform, by searching, or in other manners.
A third image is obtained and divided into a first preset number of image areas of the same size. In addition, the third image comprises a plurality of uniformly distributed pixel points, so each of the first preset number of image areas also contains the same number of pixel points.
The first preset number is set by the terminal or by a user, or can be set in other manners. For example, the first preset number may be 4, 6, 8, or another value.
For example, the third image is divided into 1 × 6 image areas, or the third image is divided into 2 × 8 image areas, or the third image is divided into 6 × 1 image areas, or the third image is divided into another number of image areas.
For example, as shown in fig. 3, one third image is divided into 5 image areas from top to bottom.
In one possible implementation, the length and width of the third image are uniformly divided, so that the third image can be divided into a first preset number of image areas.
Optionally, in the process of dividing the third image, the first size of the divided image area is determined according to the size of the third image and the first preset number, and the image area satisfying the first size is divided from the third image according to the first size.
In the process of dividing the third image, after the size of the third image and the first preset number are determined, the size of the third image may be divided equally by the first preset number to obtain a first size, which is the size of each image area divided from the third image. Dividing the third image according to the first size thus yields a first preset number of image areas of the same size.
Optionally, in the process of dividing the third image, the third image may be divided in a horizontal manner to obtain a first preset number of regions arranged in a horizontal direction, or the third image may be divided in a vertical manner to obtain a first preset number of regions arranged in a vertical direction, or the third image may be divided in two manners including the horizontal and vertical manners to obtain a first preset number of regions arranged in the horizontal and vertical directions.
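As a concrete illustration of this division, the following Python sketch derives the first size from the third image's size and the first preset number, then slices the image accordingly. The NumPy array representation and all names here are assumptions for illustration, not from the patent, and the image height is assumed to be divisible by the preset number.

```python
import numpy as np

def divide_image(third_image: np.ndarray, first_preset_number: int) -> list:
    """Split an H x W x 3 image into `first_preset_number` equal horizontal
    regions; the region height is the "first size" derived from the image
    size and the preset number."""
    first_size = third_image.shape[0] // first_preset_number  # height of each area
    return [third_image[i * first_size:(i + 1) * first_size]
            for i in range(first_preset_number)]
```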
In step 202, down-sampling is performed on a first preset number of image regions at the same time to obtain a first preset number of first pixel points, and the first preset number of first pixel points form a first image.
The downsampling process reduces the number of pixel points of an image and creates a new image with fewer pixel points. For example, when the third image includes 100 pixel points, downsampling the third image yields a new image including 10 pixel points, and the size of the first image is smaller than that of the third image.
When the first preset number of image areas are downsampled, each image area is fused into one pixel point, with all areas processed at the same time, to obtain the first preset number of first pixel points, which form the first image.
For example, after the third image is divided into 5 image regions, the 5 image regions are down-sampled to obtain 5 first pixel points, and the 5 first pixel points constitute the first image.
When the first preset number of first pixel points form the first image, the first pixel point corresponding to each image area is placed in the first image according to the position of that image area in the third image.
In a possible implementation manner, the color value of each image area is determined according to the color values of the pixel points included in that image area and taken as the color value of the first pixel point corresponding to the image area, and the first image containing the first pixel points is created according to the determined color values of the first pixel points.
Since the third image includes a first preset number of image regions, and each image region corresponds to one first pixel point in the first image, the first image includes the first preset number of first pixel points.
The color value is used to represent the color of a pixel point. It may be an RGB (Red, Green, Blue) value, a pixel value, or another type of value.
Optionally, an average value of color values of the pixel points in each image region is obtained, and the average value of color values of the pixel points in each image region is used as a color value of a first pixel point corresponding to each image region in the first image.
The average value of the color values of the pixel points in each image area is obtained with the following formula:

C_i = \frac{1}{|D_i|} \sum_{(u,v) \in D_i} C_{(u,v)}

where C_i denotes the average color value of the pixel points of the i-th image area, |D_i| denotes the area (the number of pixel points) of the i-th image area, C_{(u,v)} denotes the color value of the pixel point at coordinates (u, v), and D_i denotes the set of pixel points of the i-th image area.
For example, for a first image area including 2 pixel points whose color values are (20, 60, 30) and (60, 80, 20), the average value of the color values of the pixel points in that image area obtained with the above formula is (40, 70, 25).
For example, as shown in fig. 4, after the third image is divided into 5 image regions from top to bottom, an average value of color values of pixel points of each image region is obtained, and the first image includes color values of 5 pixel points.
For example, after the third image is divided into 4 image regions in the order from top to bottom, the color values of the acquired 4 image regions are (20, 40, 20), (30, 20, 30), (50, 50, 60) and (100, 20, 20) respectively in the order from top to bottom, and the color values of the first pixels of the first image are (20, 40, 20), (30, 20, 30), (50, 50, 60) and (100, 20, 20) respectively.
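The downsampling of steps 201 and 202 can be sketched in Python as follows. This is a minimal illustration assuming an H x W x 3 NumPy image whose height is divisible by the preset number; the function and variable names are invented for the example.

```python
import numpy as np

def downsample_by_region_mean(third_image: np.ndarray,
                              first_preset_number: int) -> np.ndarray:
    """Build the first image: one first pixel point per image area, whose
    color value is the mean C_i = (1/|D_i|) * sum over D_i of C_(u,v)."""
    regions = np.split(third_image, first_preset_number, axis=0)  # top-to-bottom areas
    first_pixels = [r.reshape(-1, r.shape[-1]).mean(axis=0) for r in regions]
    return np.stack(first_pixels)  # shape (first_preset_number, 3), one row per area

```

For the two-pixel area in the example above, the per-channel mean of (20, 60, 30) and (60, 80, 20) is (40, 70, 25), matching the formula.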
In addition, when the resolution of the third image is M × N, performing the downsampling with the steps of the related art has time complexity O(M × N); with steps 201 and 202 provided in this embodiment of the present application, the image areas are processed in parallel, so the time complexity is O(1), a factor of M × N better than the related art, which improves the efficiency of downsampling the image.
In the embodiment of the present application, obtaining the average value of the color values of the pixel points of each image area makes the determined color values of the corresponding first pixel points more uniform, which can improve their accuracy. Moreover, obtaining the first image by downsampling the third image improves the efficiency of extracting the dominant hue of the image and reduces the amount of data to process.
In the embodiments of the present application, the first image is obtained by performing down-sampling processing on the third image. In another embodiment, the step 201 and the step 202 may not be executed, and the first image may be directly acquired, where the first image may be any image from which the dominant hue needs to be extracted, and the acquisition manner is similar to the manner of acquiring the third image in the step 201.
In one possible implementation manner, when step 201 and step 202 are not executed, the first image in the embodiment of the present application may be a landscape image, a person image, or other types of images. In addition, the first image may be obtained by shooting, by searching for images posted by other users on the social platform, by searching, or in other manners.
After the first image is obtained, steps 203 to 205 can be executed to directly obtain the dominant hue of the first image.
In step 203, the following operation is performed for each first pixel point simultaneously: the mixed color value of a reference pixel point is acquired according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point.
The reference pixel point is any pixel point in the first image.
When each first pixel point is processed, any pixel point in the first image is used as a reference pixel point, the distance between the reference pixel point and the first pixel point is determined according to the coordinate of the reference pixel point and the coordinate of the first pixel point, and then the color value of the reference pixel point and the color value of the first pixel point are mixed according to the obtained distance, so that the color value after the reference pixel point is mixed is obtained.
Optionally, the following operation is performed for each first pixel point simultaneously: according to the distance between the reference pixel point and the first pixel point, the color value of the first pixel point and the color value of the reference pixel point are mixed, and the resulting color value is taken as the mixed color value of the first pixel point relative to the reference pixel point; the sum of the mixed color values of all first pixel points relative to the reference pixel point is then taken as the mixed color value of the reference pixel point.
The mixed color value of each first pixel point relative to the reference pixel point is obtained by adopting the following formula:
y = a \cdot e^{-k x^{2}}

where y is the mixed color value of each first pixel point relative to the reference pixel point, a is the color value of the first pixel point, k is a fixed value, and x is the coordinate offset (distance) of the first pixel point relative to the reference pixel point.
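A minimal NumPy sketch of this mixing step follows, treating the first image as the (first_preset_number, 3) array produced by the downsampling sketch above. Note a labeled deviation: the formula as stated sums the contributions y = a * exp(-k * x^2) directly, while this sketch additionally normalizes by the total weight (the conventional Gaussian-blur reading) so that mixed color values stay in the usual range. All names are illustrative assumptions.

```python
import numpy as np

def mix_color_values(first_image: np.ndarray, k: float = 0.5) -> np.ndarray:
    """For every reference pixel point, mix in each first pixel point's color
    value with Gaussian weight exp(-k * x**2), x being their distance."""
    n = first_image.shape[0]
    coords = np.arange(n, dtype=np.float64)
    sq_dist = (coords[:, None] - coords[None, :]) ** 2  # pairwise squared distances
    weights = np.exp(-k * sq_dist)                      # one row per reference pixel
    # Sum of y = a * exp(-k * x**2) over all first pixel points, normalized
    # by the total weight so mixed color values remain in range.
    return (weights @ first_image) / weights.sum(axis=1, keepdims=True)
```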
In addition, it should be noted that, in the interpolation processing of step 203 in this embodiment of the present application, the color value of each first pixel point is interpolated to obtain the second image, in which the color values of the pixel points are smoother; this improves the accuracy of the subsequently extracted dominant hue.
When the first image is processed, each first pixel point in the first image can be processed at the same time, so that the processing time can be saved, and the processing efficiency can be improved.
Optionally, in steps 202 and 203 of the present application, a GPU may be used to downsample the first preset number of image areas in parallel to obtain the first preset number of first pixel points, form them into the first image, and then acquire the mixed color values of the reference pixel points in parallel according to the distance between each reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point. Parallel processing on the GPU achieves the effect of downsampling multiple image areas simultaneously in step 202 and of obtaining the mixed color values of the reference pixel points simultaneously in step 203.
In addition, if the resolution of the first image is M × N and the image is processed with a convolution kernel of size C, the processing time complexity in the related art is O(M × N × C). With step 203 provided in this embodiment of the present application, the pixel points are computed simultaneously, so the time complexity is O(C), a factor of M × N better than the related art, which improves the efficiency of processing the image.
In step 204, the first image is processed according to the color value of each mixed first pixel point in the first image, and a second image is generated.
After the mixed color value of each first pixel point in the first image is obtained, the first image can be continuously processed, the color value of each first pixel point in the first image is determined as the mixed color value, and a second image corresponding to the first image is generated.
It should be noted that this embodiment is described only by taking as an example interpolation of the first image with a Gaussian function. In another embodiment, a linear interpolation method may also be adopted to interpolate the first image and obtain the second image.
In step 205, a color value of at least one pixel point is extracted from the second image, and the color value of the at least one pixel point is determined as a dominant hue of the first image.
The second image comprises a plurality of pixel points, and when the dominant hue of the first image is determined, the color value of at least one pixel point can be extracted from the second image to be used as the dominant hue of the first image.
In a possible implementation manner, the second image is divided into a second preset number of image regions of the same size; any pixel point is extracted from each of these image regions to obtain a second preset number of pixel points, and the color values of the second preset number of pixel points are extracted and determined as the dominant hue of the first image.
The second image is divided into a second preset number of image areas in order to improve the accuracy of the determined dominant hue: the second preset number of image areas can represent the dominant hues of the corresponding areas in the first image. Any pixel point is extracted from each of the second preset number of image areas to obtain the second preset number of pixel points, and their color values are extracted to determine the dominant hue of the first image.
Optionally, when extracting the pixel points from a second preset number of image regions of the second image, extracting the center pixel point of each image region to obtain a second preset number of pixel points, extracting color values of the second preset number of pixel points, and determining the color values as the dominant hue of the first image.
Wherein the second preset number of image areas are the same size. In addition, the second preset number is set by the terminal, or set by the user, or may be set in other manners. For example, the second predetermined amount may be 5, 6, 7, or other values. The center pixel point is a pixel point located at the center of the image area.
For example, when the image region includes 7 pixel points arranged from top to bottom, the 4th pixel point is the central pixel point of the image region.
In addition, the second image is divided into 5 image areas from top to bottom, the 5 image areas are the same in size, the central pixel point of each image area is determined according to the size of the 5 image areas, and the color corresponding to the color value of the determined 5 central pixel points is the dominant hue of the first image.
Optionally, a second size of the divided image areas is determined according to the size of the second image and a second preset number, and the image areas satisfying the second size are divided from the second image according to the second size.
In the process of dividing the second image, after the size of the second image and the second preset number are determined, the size of the second image may be divided equally by the second preset number to obtain a second size, which is the size of each image area divided from the second image. Dividing the second image according to the second size thus yields a second preset number of image areas of the same size.
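The extraction of step 205 can be sketched as below, reusing the one-row-per-pixel layout of the earlier sketches. The names and the choice of the center pixel point are assumptions for illustration, and the number of pixel points is assumed divisible by the second preset number.

```python
import numpy as np

def extract_dominant_hues(second_image: np.ndarray,
                          second_preset_number: int) -> np.ndarray:
    """Divide the second image into equal areas and take the color value of
    each area's center pixel point as a dominant hue of the first image."""
    areas = np.split(second_image, second_preset_number, axis=0)
    return np.stack([area[len(area) // 2] for area in areas])
```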
In a possible implementation manner, the second preset number of image regions in the second image have corresponding image regions in the first image, so that the color value of each pixel point extracted from the second image is respectively used as the dominant hue of the corresponding region of the image region in which each pixel point is located in the first image.
In addition, by way of example, a method for extracting a dominant hue of an image provided in an embodiment of the present application is described. For example, as shown in fig. 5, step 201 and step 202 are executed to perform downsampling processing on the third image to obtain a first image, as shown in fig. 6, step 203 and step 204 are executed to perform interpolation processing on the first image to obtain a second image, as shown in fig. 7, step 205 is executed to extract 5 pixel points from the second image, and color values of the 5 pixel points are the dominant hue of the first image.
In the embodiment of the present application, if the first image is an image obtained by downsampling the third image, the determined dominant hue of the first image may be used as the dominant hue of the third image. In another embodiment, if steps 201 and 202 are not performed, the dominant hue of the first image is obtained directly.
Optionally, in step 205 in this embodiment of the present application, the color values of at least one pixel point may be extracted from the second image simultaneously.
Optionally, in step 205 in this embodiment of the application, the color value of at least one pixel point may be extracted in the second image in parallel by using the GPU, and then a color corresponding to the color value of at least one pixel point may be used as the dominant hue of the first image.
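Putting the sketches together, a hypothetical end-to-end run might look like the following; all helper names and parameter values come from the illustrative sketches above, not from the patent.

```python
import numpy as np

# Steps 201-202: divide the third image and downsample it into the first image.
third_image = np.random.randint(0, 256, size=(100, 80, 3)).astype(np.float64)
first_image = downsample_by_region_mean(third_image, first_preset_number=20)

# Steps 203-204: mix color values to generate the smoother second image.
second_image = mix_color_values(first_image, k=0.5)

# Step 205: extract one color value per area as a dominant hue.
dominant_hues = extract_dominant_hues(second_image, second_preset_number=5)
print(dominant_hues)  # five color values, the dominant hues of the first image
```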
The method provided by the embodiment of the present application obtains a first image comprising a plurality of first pixel points and performs the following operation for each first pixel point simultaneously: obtaining the mixed color value of any reference pixel point in the first image according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, which improves the efficiency of processing the image. The first image is then processed according to the mixed color value of each first pixel point in the first image to generate a second image, in which the color values of the pixel points are more uniform. The color value of at least one pixel point is extracted from the second image and determined as the dominant hue of the first image. Because the color values of the pixel points of the first image are mixed, the extracted dominant hue better matches the characteristics of the first image, which improves the accuracy of dominant hue extraction, saves processing time, and improves the extraction efficiency of the dominant hue.
Moreover, by obtaining the average value of the color values of the pixels in each image area, the color values of the first pixels corresponding to the determined image areas are more uniform, and the accuracy of the color values of the determined first pixels can be improved. And the first image is obtained by performing downsampling processing on the first preset number of image areas of the third image, so that the efficiency of extracting the dominant hue of the image can be improved, and the data volume of the processing is reduced.
Fig. 8 is a schematic structural diagram illustrating a dominant hue extraction apparatus according to an exemplary embodiment. Referring to fig. 8, the apparatus includes:
an image obtaining unit 801, configured to obtain a first image, where the first image includes a plurality of first pixel points;
a color value obtaining unit 802, configured to perform the following operation for each first pixel point simultaneously: acquiring the mixed color value of a reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
the generating unit 803 is configured to process the first image according to the color value obtained by mixing each first pixel in the first image, and generate a second image;
the extracting unit 804 is configured to extract a color value of at least one pixel from the second image, and determine the color value of the at least one pixel as a dominant hue of the first image.
In one possible implementation, referring to fig. 9, the color value obtaining unit 802 includes:
a mixing subunit 8021, configured to perform the following operations for each first pixel point simultaneously: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point and the color value of the reference pixel point, and taking the mixed color value of the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
the determining subunit 8022 is configured to use a sum of mixed color values of each first pixel point with respect to the reference pixel point as a color value of the reference pixel point after mixing.
In another possible implementation, referring to fig. 9, the image acquisition unit 801 comprises:
a first dividing subunit 8011, configured to divide the third image into a first preset number of image areas, each of which has the same size;
the processing subunit 8012 is configured to perform downsampling processing on the first preset number of image regions simultaneously to obtain a first preset number of first pixel points, and configure the first preset number of first pixel points into a first image.
In another possible implementation, referring to fig. 9, the processing subunit 8012 is configured to:
simultaneously, determining the color value of each image area according to the color values of the pixel points included in each image area, as the color value of the first pixel point corresponding to each image area;
and creating a first image containing a plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation, the processing subunit 8012 is configured to:
simultaneously obtaining the average value of the color values of the pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation, the first dividing subunit 8011 is configured to:
determining a first size of the divided image area according to the size of the third image and the first preset number;
an image area satisfying the first size is divided from the third image according to the first size.
In another possible implementation, referring to fig. 9, the extracting unit 804 includes:
a second dividing subunit 8041, configured to divide the second image into a second preset number of image areas, where the sizes of the second preset number of image areas are the same;
a pixel point extracting subunit 8042, configured to extract any pixel point from each image area in the second preset number of image areas, to obtain a second preset number of pixel points;
the color value extracting subunit 8043 is configured to extract color values of the second preset number of pixel points.
In another possible implementation, the second dividing subunit 8041 is configured to:
determining a second size of the divided image area according to the size of the second image and a second preset number;
and dividing the image area satisfying the second size from the second image according to the second size.
In another possible implementation manner, the second preset number of image areas have corresponding image areas in the first image, and the extracting unit 804 is configured to:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 10 is a block diagram illustrating an electronic device, such as a terminal, according to an exemplary embodiment. The terminal 1000 can be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: one or more processors 1001 and one or more memories 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include volatile or non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1002 is used to store at least one instruction for execution by the processor 1001 to implement the dominant hue extraction method provided by the method embodiments herein.
In some embodiments, terminal 1000 may also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002, and peripheral interface 1003 may be connected by a bus or signal line. Each peripheral may be connected to the peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripherals include: at least one of a radio frequency circuit 1004, a display screen 1005, a camera assembly 1006, an audio circuit 1007, a positioning component 1008, and a power supply 1009.
The peripheral interface 1003 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 1001 and the memory 1002. In some embodiments, the processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1004 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may further include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1001 as a control signal for processing. In this case, the display screen 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1005, disposed on the front panel of terminal 1000; in other embodiments, there may be at least two display screens 1005, respectively disposed on different surfaces of terminal 1000 or in a folded design; in still other embodiments, the display screen 1005 may be a flexible display disposed on a curved or folded surface of terminal 1000. The display screen 1005 may even be arranged as a non-rectangular irregular figure, i.e., an irregularly-shaped screen. The display screen 1005 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 1001 for processing or to the radio frequency circuit 1004 for voice communication. For stereo collection or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into a sound wave audible to humans, or into a sound wave inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic location of terminal 1000 to implement navigation or LBS (Location Based Service). The positioning component 1008 may be based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to supply power to various components in terminal 1000. The power source 1009 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 1009 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1000 may also include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: an acceleration sensor 1011, a gyro sensor 1012, a pressure sensor 1013, a fingerprint sensor 1014, an optical sensor 1015, and a proximity sensor 1016.
Acceleration sensor 1011 can detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with terminal 1000. For example, the acceleration sensor 1011 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1001 may control the display screen 1005 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used to collect motion data for games or the user.
The gyro sensor 1012 may detect a body direction and a rotation angle of the terminal 1000, and the gyro sensor 1012 and the acceleration sensor 1011 may cooperate to acquire a 3D motion of the user on the terminal 1000. From the data collected by the gyro sensor 1012, the processor 1001 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1013 may be disposed on a side frame of terminal 1000 and/or under the display screen 1005. When the pressure sensor 1013 is disposed on a side frame of terminal 1000, it can detect the user's grip signal on terminal 1000, and the processor 1001 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is disposed under the display screen 1005, the processor 1001 controls an operability control on the UI according to the user's pressure operation on the display screen 1005. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint, and the processor 1001 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as trusted, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1014 may be disposed on the front, back, or side of terminal 1000. When a physical key or vendor logo is provided on terminal 1000, the fingerprint sensor 1014 may be integrated with the physical key or vendor logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the display screen 1005 according to the ambient light intensity collected by the optical sensor 1015. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the display screen 1005 is turned down. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the intensity of the ambient light collected by the optical sensor 1015.
Proximity sensor 1016, also known as a distance sensor, is typically disposed on the front panel of terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front face of terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front face of terminal 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the bright screen state to the dark screen state; when the proximity sensor 1016 detects that the distance between the user and the front face of terminal 1000 gradually increases, the processor 1001 controls the display screen 1005 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or some components can be combined, or a different arrangement of components can be employed.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the steps performed by the terminal or the server in the above-described dominant hue extraction method.
In an exemplary embodiment, there is also provided a computer program product, wherein instructions of the computer program product, when executed by a processor of an electronic device, enable the electronic device to perform the steps performed by the terminal or the server in the above-mentioned dominant hue extraction method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of dominant hue extraction, characterized in that the method comprises:
acquiring a first image, wherein the first image comprises a plurality of first pixel points;
simultaneously executing the following operation for each first pixel point: acquiring a color value of the mixed reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
processing the first image according to the color value of each mixed first pixel point in the first image to generate a second image;
and extracting the color value of at least one pixel point from the second image, and determining the color value of the at least one pixel point as the dominant hue of the first image.
2. The method of claim 1, wherein the simultaneously executing, for each first pixel point, the operation of acquiring the color value of the mixed reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, the reference pixel point being any pixel point in the first image, comprises:
simultaneously executing the following operation for each first pixel point: mixing the color value of the first pixel point with the color value of the reference pixel point according to the distance between the reference pixel point and the first pixel point, and taking the result as the mixed color value of the first pixel point relative to the reference pixel point;
and taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the mixed color value of the reference pixel point.
3. The method of claim 1, wherein said acquiring a first image comprises:
dividing the third image into a first preset number of image areas, wherein the size of each image area is the same;
and simultaneously performing down-sampling processing on the first preset number of image areas to obtain a first preset number of first pixel points, and forming the first preset number of first pixel points into the first image.
4. The method according to claim 3, wherein the simultaneously performing down-sampling processing on the first preset number of image areas to obtain the first preset number of first pixel points, and forming the first preset number of first pixel points into the first image, comprises:
simultaneously determining, according to the color values of the pixel points included in each image area, the color value of each image area as the color value of the first pixel point corresponding to that image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
5. The method according to claim 4, wherein the simultaneously determining, according to the color values of the pixel points included in each image area, the color value of each image area as the color value of the first pixel point corresponding to each image area comprises:
simultaneously obtaining the average value of the color values of the pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
6. The method of claim 1, wherein the extracting a color value of at least one pixel point from the second image comprises:
dividing the second image into a second preset number of image areas, wherein the sizes of the second preset number of image areas are the same;
extracting any pixel point from each image area in the second preset number of image areas to obtain the second preset number of pixel points;
and extracting the color values of the second preset number of pixel points.
7. The method of claim 6, wherein the second preset number of image areas have corresponding image areas in the first image, and wherein the determining the color value of the at least one pixel point as the dominant hue of the first image comprises:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
8. A dominant hue extraction apparatus, characterized by comprising:
the image acquisition unit is used for acquiring a first image, and the first image comprises a plurality of first pixel points;
a color value obtaining unit, configured to simultaneously perform the following operation for each first pixel point: acquiring a color value of the mixed reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
the generating unit is used for processing the first image according to the color value of each mixed first pixel point in the first image to generate a second image;
and the extraction unit is used for extracting the color value of at least one pixel point from the second image and determining the color value of the at least one pixel point as the dominant hue of the first image.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the dominant hue extraction method of any one of claims 1-7.
10. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the dominant hue extraction method of any one of claims 1-7.
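The claims leave the exact mixing function open; purely for illustration, the sketch below reads the distance-dependent mixing of claims 1-2 as a normalized Gaussian falloff and applies it to the (already down-sampled) first image of claim 3, producing the second image from which color values are then sampled. The function name and the sigma parameter are assumptions, not part of the claims; the dense N×N weight matrix is tractable precisely because the first image is small after down-sampling.

```python
import numpy as np

def mix_by_distance(first_image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Claims 1-2 sketch: for every reference pixel, mix the color values of
    all first pixel points according to their distance to it and sum the
    mixed contributions. A normalized Gaussian falloff is an assumed
    weighting; the claims do not specify the mixing function."""
    h, w, c = first_image.shape
    img = first_image.reshape(-1, c).astype(np.float64)            # (N, c) color values
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)  # (N, 2)
    # Squared distance between every reference pixel and every first pixel.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)      # (N, N)
    # Assumed mixing weights, normalized so the mixed contributions of all
    # first pixel points sum to 1 for each reference pixel.
    weights = np.exp(-d2 / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=1, keepdims=True)
    mixed = weights @ img                                          # sum of mixed color values
    return mixed.reshape(h, w, c).astype(np.uint8)                 # the second image
```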
CN202010485775.8A 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium Active CN113763486B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010485775.8A CN113763486B (en) 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium
PCT/CN2020/127558 WO2021243955A1 (en) 2020-06-01 2020-11-09 Dominant hue extraction method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010485775.8A CN113763486B (en) 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113763486A true CN113763486A (en) 2021-12-07
CN113763486B CN113763486B (en) 2024-03-01

Family

ID=78782666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010485775.8A Active CN113763486B (en) 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113763486B (en)
WO (1) WO2021243955A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015039567A1 (en) * 2013-09-17 2015-03-26 Tencent Technology (Shenzhen) Company Limited Method and user apparatus for window coloring
CN105989799A (en) * 2015-02-12 2016-10-05 西安诺瓦电子科技有限公司 Image processing method and image processing device
CN106780634A (en) * 2016-12-27 2017-05-31 努比亚技术有限公司 Picture dominant tone extracting method and device
CN106898026A (en) * 2017-03-15 2017-06-27 腾讯科技(深圳)有限公司 The dominant hue extracting method and device of a kind of picture
CN110825968A (en) * 2019-11-04 2020-02-21 腾讯科技(深圳)有限公司 Information pushing method and device, storage medium and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100378351B1 (en) * 2000-11-13 2003-03-29 삼성전자주식회사 Method and apparatus for measuring color-texture distance, and method and apparatus for sectioning image into a plurality of regions using the measured color-texture distance
CN1977542B (en) * 2004-06-30 2010-09-29 皇家飞利浦电子股份有限公司 Dominant color extraction using perceptual rules to produce ambient light derived from video content
CN102523367B (en) * 2011-12-29 2016-06-15 全时云商务服务股份有限公司 Real time imaging based on many palettes compresses and method of reducing
EP2806401A1 (en) * 2013-05-23 2014-11-26 Thomson Licensing Method and device for processing a picture
CN103761303B (en) * 2014-01-22 2017-09-15 广东欧珀移动通信有限公司 The arrangement display methods and device of a kind of picture
CN109472832B (en) * 2018-10-15 2020-10-30 广东智媒云图科技股份有限公司 Color scheme generation method and device and intelligent robot

Also Published As

Publication number Publication date
WO2021243955A1 (en) 2021-12-09
CN113763486B (en) 2024-03-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant