CN111768469B - Image clustering-based data visual color matching extraction method - Google Patents


Info

Publication number
CN111768469B
CN111768469B (application CN202010784746.1A)
Authority
CN
China
Prior art keywords
frame
color
video
clustering
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010784746.1A
Other languages
Chinese (zh)
Other versions
CN111768469A (en)
Inventor
李春芳
石民勇
***
王楷翔
王丹煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Communication University of China
Original Assignee
Communication University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Communication University of China
Publication of CN111768469A
Application granted
Publication of CN111768469B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting data-visualization color schemes based on image clustering, which comprises the following steps: inputting a video or a picture; segmenting the video with a size-adjustable sliding window to obtain a number of shots; extracting one or more key frames from each shot using a color-histogram frame-difference method and a clustering algorithm; clustering the HSV color values of the pixels of each extracted key frame to obtain a color scheme per key frame, or clustering the HSV color values of the pixels of an input picture to obtain its color scheme; and outputting the color scheme and applying it to data-visualization color matching. The method has low time complexity and can extract color schemes from either pictures or videos; for videos, it effectively provides a color scheme that changes along with the playback timeline.

Description

Image clustering-based data visual color matching extraction method
Technical Field
The invention relates to the technical field of image processing of video content, and in particular to a method, based on image clustering, for extracting data-visualization color schemes from video colors.
Background
When designing a data chart, color is the most important element shaping the chart's overall appearance; choosing correct, well-matched colors makes a chart more professional and attractive. At the same time, among all the visual channels involved in data visualization, color, as the first factor of visual perception, is the most effective visual encoding for communicating data.
Color matching for data visualization has long been a challenge for programmers, since most data-visualization programmers come from science and engineering backgrounds and lack basic training in design thinking and color matching. Programmers instead rely on random colors or on the color schemes supplied by D3.js and ECharts. As a result, many visualization interfaces look monotonous, and one can often tell at a glance, from the colors alone, which visualization API was used.
Although many data-visualization APIs are available today, D3.js, the most popular visualization library on GitHub, and the ECharts visualization library provided by Baidu are the most widely used in the field.
Version 3.0 of D3.js provides one color scheme of 10 colors and three schemes of 20 colors. If the number of SVG graphic elements in a visualization exceeds the number of colors in the selected scheme, colors are assigned modulo the scheme size; that is, with the 10-color scheme, the 1st and the 11th graphic elements receive the same color.
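The modulo assignment described above can be illustrated with a short sketch; the palette entries here are stand-in labels, not D3's actual color values:

```python
# Stand-in for a 10-colour scheme (hypothetical labels, not d3's real hex codes)
palette = ["#c%02d" % j for j in range(10)]

def pick(i):
    # d3's modulo behaviour: graphic element i gets palette[i % len(palette)]
    return palette[i % len(palette)]

print(pick(0), pick(10))  # prints: #c00 #c00 -- 1st and 11th share a colour
```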
Baidu's ECharts API designs its color schemes around aesthetic factors, color combination, visual mapping, metaphor, visual channels and the like, and offers an easy-to-use, configuration-style way to build visualizations; subjectively, however, its default palette feels somewhat subdued, and its fresh theme feels cold and pale. Although ECharts ships three switchable themes, only the default theme comes with concrete color values; the other two give no specific values and serve only as references. Moreover, ECharts' default coloring only distinguishes between data series: elements within the same series all default to one color, so a bar chart whose data sits in a single series is drawn in a single color. To customize the color of each rectangle, the programmer must edit the option code and supply a specific set of colors.
In addition, programmers often fall back on random-number color schemes for variety. Their colors never repeat and are not limited by the size of any palette, but a completely random scheme looks harsh and crude, conveys no sense of design, and is a choice of last resort.
The colors of many film and television works feel comfortable and harmonious and support the theme of the work, because their color design has been refined by professional colorists. Typical examples of deliberate color design include Zhang Yibai's film "I Belonged to You", Zhang Yimou's ink-wash-style film "Shadow", and the period drama "Story of Yanxi Palace", all of which show remarkable colorist craft. Famous paintings, such as the well-known sunflower series, Zhang Daqian's flower-and-bird albums, and the lotus series of modern painters, likewise display free and natural color from masters of color matching. The colors of films, famous paintings and photographic works can therefore be borrowed for data-visualization color matching: the color schemes carefully chosen by artists are extracted by an algorithm and mapped onto visualization works, solving the problems of flexibility, diversity and artistry in color matching. The invention extracts colors from video algorithmically, so that a programmer can complete the process from video to color scheme without a dedicated data-visualization colorist or artist, and apply the result to visualization works. As a special case of video, a single image yields a color scheme through pixel-level clustering. The color-extraction process is described below taking video as an example.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a method for extracting data-visualization color schemes based on image clustering. Colors are extracted from video by an algorithm, independently of professional data-visualization colorists or artists: the programmer completes the whole process from video to color scheme and applies it to data-visualization works.
The invention discloses a method for extracting data-visualization color schemes based on image clustering, which comprises the following steps:
inputting a video or a picture;
segmenting the video with a size-adjustable sliding window to obtain a number of shots;
extracting one or more key frames from each segmented shot using a color-histogram frame-difference method and a clustering algorithm;
clustering the HSV color values of the pixels of each extracted key-frame picture to obtain a color scheme per key frame, or clustering the HSV color values of the pixels of the input picture to obtain its color scheme;
outputting the color scheme and applying it to data-visualization color matching.
As a further improvement of the present invention, segmenting the video with a size-adjustable sliding window to obtain a number of shots comprises the following steps:
defining the current frame C and the frame set F against which distances are computed, and presetting the window size n; let avr be the average distance between the current frame C and the frame set F in the sliding window, and preavr the average distance between the previous frame C' and its frame set F';
S21, read the current frame C and judge whether F is empty; if empty, add C to F, set avr = 0, and jump to S25; if not, jump to S22;
S22, compute the cosine distances between the color-histogram vector of the current frame C and those of all frames in F, take their arithmetic mean as avr, and jump to S23;
S23, judge whether avr > 9 × preavr, preavr > 0 and avr > 0.1 hold simultaneously; if so, jump to S27; if not, jump to S24. Here avr > 9 × preavr indicates a sudden break in inter-frame continuity, preavr > 0 means the previous frame was not itself the start of a new shot, and avr > 0.1 requires the start of a new shot to differ sufficiently from every frame of the previous shot;
S24, judge whether the number of frames in F is smaller than n; if so, add the current frame C to F and jump to S25; if it is greater than or equal to n, jump to S26;
S25, not the start of a new shot, and the window is not yet full; process the next frame: set preavr = avr and jump to S21;
S26, not the start of a new shot, and the window is full: the sliding window moves back one frame in time order; since the set is unordered, to reduce computation the frame with the smallest frame number in F is replaced by the current frame C, and the process jumps to S21;
S27, the start of a new shot: mark the current frame C as the start of a new shot, clear F, set preavr = 0, and jump to S21.
As a further improvement of the present invention, the preset window size n is the frame rate of the video, typically 24, 25 or 30.
As a further improvement of the invention, extracting one or more key frames from each segmented shot by the color-histogram frame-difference method and a clustering algorithm comprises the following steps:
S31, divide the frame set into m equal parts and take the first frame of each part as an initial cluster center;
S32, for each frame in the frame set, compute the distances to the m cluster centers, add the frame to the cluster whose center is nearest, update the cluster centers, and jump to S33;
S33, judge whether the termination condition is reached, i.e. whether the number of iterations has reached the maximum or the cluster centers no longer change; if so, jump to S34; if not, repeat S32;
S34, select from the frame set the frame closest to each of the m final cluster centers and output these frames as key frames.
As a further improvement of the invention, clustering the HSV color values of the pixels in a picture to obtain the color scheme of each key-frame picture comprises the following steps:
S41, take k pixels at random from the picture as initial cluster centers, where k is the number of colors required for data-visualization color matching;
S42, compute the distance from each pixel in the picture to the k cluster centers, add the pixel to the cluster whose center is nearest, update the cluster centers, and jump to S43;
S43, judge whether the termination condition is reached, i.e. whether the number of iterations has reached the maximum or the specified precision has been attained; if so, jump to S44; if not, repeat S42;
S44, take as the color-matching output the color of the pixel closest to each final cluster center, i.e. output color values of pixels actually contained in the picture.
As a further improvement of the present invention, outputting the color scheme and applying it to data-visualization color matching comprises:
outputting a series of color schemes for the video, displaying them in real time as the video plays, and applying the current scheme to data-visualization color matching; or,
outputting the color scheme of a picture, applying it to data-visualization color matching, and back-filling it pixel by pixel into the original picture.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides visualization color schemes that change over time; the number of colors in the output scheme can be set arbitrarily through the number of clusters, and video or image color schemes can be extracted according to the programmer's preference, to express aesthetic mood, cool tones, warm tones or a particular color preference, giving visualization works flexible, freely chosen and richly combinable color matching.
Drawings
FIG. 1 is a flow chart of the image-clustering-based method for extracting data-visualization color schemes, disclosed by an embodiment of the invention;
FIG. 2 is a flow chart of splitting video shots with the sliding window of FIG. 1;
FIG. 3 is a flow chart of extracting the key frames of each shot by the picture-level clustering of FIG. 1;
FIG. 4 is a flow chart of generating a color scheme from a key-frame picture by the pixel-level clustering of FIG. 1;
FIG. 5 is an exemplary diagram of the sliding-window shot segmentation of the present invention;
FIG. 6 shows the frame-distance changes measured during shot segmentation of a short Spring Festival Gala video.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is described in further detail below with reference to the attached drawing figures:
the invention provides a method, based on image clustering, for extracting data-visualization color schemes from video colors. It mainly extracts color schemes from film and television works, offering inspiration for color design that can be applied to data-visualization color matching; at the same time, an image, as a special case of video, yields its color scheme through pixel-level clustering.
The key points in implementing the extraction method of the invention are as follows:
1. How to segment the video. Video shot segmentation is also called shot boundary detection: as the name implies, a complete video is divided, shot by shot, into short segments ranging from a few seconds to tens of seconds. Taking a TV drama as an example, one standard-definition 45-minute episode at a 30 frame rate contains nearly a hundred thousand frames, and analyzing the whole video directly is a hard task. Shot segmentation therefore serves mainly as video preprocessing.
2. How to select key frames from each shot. Key-frame extraction aims to pick, out of all the frames of a video, a subset that represents its content, an operation similar to abstract generation in natural language processing. Since a video comes with no ground-truth key frames, key-frame extraction can be treated as an unsupervised learning process, and a clustering algorithm, effective and of moderate computational cost, is a suitable choice.
3. How to extract a color scheme from a key frame and apply it to the color matching of data-visualization works. Color images are usually encoded in the RGB (red, green, blue) or HSV (hue, saturation, value) color model; the invention computes in HSV because it extracts colors better. All pixels of the picture are partitioned into K classes by color, the pixel color closest to each class center forms the picture's color scheme, and the simplest K-means clustering suffices.
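As a minimal illustration of the RGB-to-HSV conversion the method relies on, Python's standard colorsys module can be used; note the scale difference, as colorsys works on floats in [0, 1], whereas OpenCV's cvtColor scales H to [0, 180) and S, V to [0, 255]:

```python
import colorsys

# Pure red: hue 0, full saturation, full value
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # prints: 0.0 1.0 1.0
```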
The image-clustering-based video color-extraction method serves data-visualization color matching: it provides time-varying visualization color schemes, lets the number of output colors be set arbitrarily through the number of clusters, and extracts video or image color schemes according to the data-visualization programmer's preference for aesthetic mood, cool tones, warm tones or particular colors, giving visualization works flexible, freely chosen and richly combinable color matching.
The method comprises three main stages: shot segmentation, key-frame extraction within each shot, and color extraction from key-frame pictures. Because a video sequence contains a massive number of frames, shot segmentation is performed first. Based on the observation that adjacent frames within the same shot change continuously while adjacent frames across different shots change discontinuously, a variable-size sliding-window method is designed to segment the video into shots. Each shot yields one or more key frames: a moving shot (e.g. a push, pull or pan) has continuous content but a completely different beginning and end, so it needs several key frames. A series of representative key frames is therefore extracted, and picture-level clustering is designed for this purpose. From each key-frame picture, a pixel-level clustering step extracts the color scheme.
To this end, as shown in fig. 1, the method for extracting color matching for data visualization from video colors based on image clustering of the present invention specifically includes:
S1, inputting a video or a picture; wherein,
Reading the input video frame by frame through VideoCapture() of OpenCV-Python;
S2, segmenting the video with a size-adjustable sliding window to obtain a number of shots; wherein,
as shown in fig. 2, video shots are split by a variable-size sliding-window method, exploiting the continuous variation within a shot and the discontinuous variation between shots; the method specifically comprises the following steps:
defining the current frame C and the frame set F against which distances are computed, and presetting the window size n (generally the frame rate of the video: 24, 25 or 30); let avr be the average distance between the current frame C and the frame set F in the sliding window, and preavr the average distance between the previous frame C' and its frame set F';
S21, read the current frame C and judge whether F is empty; if empty, add C to F, set avr = 0, and jump to S25; if not, jump to S22;
S22, compute the cosine distances between the color-histogram vector of the current frame C and those of all frames in F (all distances in the algorithm are cosine distances), take their arithmetic mean as avr, and jump to S23;
S23, judge whether avr > 9 × preavr, preavr > 0 and avr > 0.1 hold simultaneously; if so, jump to S27; if not, jump to S24. The physical meaning is: avr > 9 × preavr indicates a sudden break in inter-frame continuity, preavr > 0 means the previous frame was not itself the start of a new shot, and avr > 0.1 requires the start of a new shot to differ sufficiently from every frame of the previous shot (a shot is unlikely to consist of a single frame);
S24, judge whether the number of frames in F is smaller than n; if so, add the current frame C to F and jump to S25; if it is greater than or equal to n, jump to S26;
S25, not the start of a new shot, and the window is not yet full; process the next frame: set preavr = avr and jump to S21;
S26, not the start of a new shot, and the window is full: the sliding window moves back one frame in time order; since the set is unordered, to reduce computation the frame with the smallest frame number in F is replaced by the current frame C, and the process jumps to S21;
S27, the start of a new shot: mark the current frame C as the start of a new shot, clear F, set preavr = 0, and jump to S21.
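The S21 to S27 loop can be sketched in pure Python. This is a simplified, hypothetical reading of the steps: each frame is represented by a precomputed color-histogram vector, and after a detected shot change the window is restarted from the current frame; in the actual method the vectors would be HSV color histograms computed with OpenCV.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two histogram vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def detect_shot_starts(frames, n=25):
    """Sketch of S21-S27; frames is a list of per-frame histogram vectors."""
    starts = [0]                 # the first frame opens the first shot
    window = []                  # frame set F: list of (frame_no, vector)
    preavr = 0.0
    for i, vec in enumerate(frames):
        if not window:           # S21: F empty -> add C, avr = 0
            window.append((i, vec))
            preavr = 0.0         # S25: preavr = avr
            continue
        # S22: mean cosine distance between C and every frame in F
        avr = sum(cosine_distance(vec, w) for _, w in window) / len(window)
        # S23: continuity break, previous frame not a shot start,
        # and large enough distance to the previous shot
        if avr > 9 * preavr and preavr > 0 and avr > 0.1:
            starts.append(i)     # S27: mark C as a new shot start
            window = [(i, vec)]  # clear F, restart from C (assumption)
            preavr = 0.0
            continue
        if len(window) < n:      # S24/S25: window not yet full
            window.append((i, vec))
        else:                    # S26: replace the smallest frame number with C
            oldest = min(range(len(window)), key=lambda j: window[j][0])
            window[oldest] = (i, vec)
        preavr = avr
    return starts

# Two synthetic "shots": red-ish histograms, then green-ish histograms
frames = ([[1.0, 0.001 * i, 0.0] for i in range(6)]
          + [[0.001 * i, 1.0, 0.0] for i in range(6)])
print(detect_shot_starts(frames, n=3))  # prints: [0, 6]
```

The small jitter in the synthetic vectors keeps preavr positive inside a shot, so the S23 test only fires at the genuine boundary.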
S3, picture-level clustering: extract one or more key frames from each segmented shot using the color-histogram frame-difference method and K-means clustering; wherein,
as shown in fig. 3, step S3 extracts the key frames of each segmented shot by a clustering method based on the color-histogram frame-difference approach. Each shot yields one or more key frames: a moving shot (push, pull, pan or tracking) has continuous content but a completely different beginning and end, so a series of representative key frames must be extracted, and the key frames of each shot are extracted with a K-means clustering algorithm based on the color-histogram frame difference. Given a frame set {f1, f2, f3, …, ft}, the t unlabeled frames are divided into m classes. After m frames are chosen as initial cluster centers, inter-frame similarity is measured by the cosine distance between feature vectors, with the HSV color histogram chosen as the feature vector. The frame set is clustered into m clusters; each iteration updates every cluster center to the mean of the feature vectors of the frames in that cluster and re-assigns the frames, iterating toward an optimum, and the final clustering is obtained after a fixed number of iterations or when the centers no longer change. The steps are:
S31, divide the frame set into m equal parts and take the first frame of each part as an initial cluster center.
S32, for each frame in the frame set, compute the distances to the m cluster centers, add the frame to the cluster whose center is nearest, update the cluster centers, and go to S33.
S33, judge whether the termination condition is reached, i.e. whether the number of iterations has reached the maximum or the cluster centers no longer change; if so, execute S34, otherwise repeat S32.
S34, select from the frame set the frame closest to each of the m final cluster centers and output these frames as key frames.
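S31 to S34 can likewise be sketched with frame vectors standing in for HSV color histograms. The m-way equal split for initialization, the cosine-distance assignment, and the nearest-to-center key-frame output follow the steps above; the synthetic data at the bottom is purely illustrative.

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def keyframe_indices(frames, m, max_iter=50):
    """S31-S34 sketch: cluster frame vectors into m clusters and return
    the index of the frame nearest each final center (the key frames)."""
    t = len(frames)
    step = t // m                                        # S31: m equal parts,
    centers = [list(frames[j * step]) for j in range(m)]  # first frame of each
    for _ in range(max_iter):                            # S32/S33
        clusters = [[] for _ in range(m)]
        for vec in frames:
            nearest = min(range(m),
                          key=lambda j: cosine_distance(vec, centers[j]))
            clusters[nearest].append(vec)
        new_centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:                       # centers unchanged
            break
        centers = new_centers
    # S34: output the frame closest to each final center
    return [min(range(t), key=lambda i: cosine_distance(frames[i], centers[j]))
            for j in range(m)]

# Two groups of similar histogram vectors -> one key frame per group
frames = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]]
print(keyframe_indices(frames, m=2))
```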
S4, perform K-means clustering on the HSV color values of the pixels of each extracted key-frame picture, one picture at a time, to obtain a color scheme per key frame; or perform K-means clustering on the HSV color values of the pixels of the input picture to obtain its color scheme; wherein,
as shown in fig. 4, color extraction from a picture can be regarded as an unsupervised classification process in which all pixels are divided into K classes by color, using K-means clustering. This sub-process calls three Python libraries: OpenCV reads the specified frame from the video, the sklearn library performs the clustering, and the matplotlib library plots the clustered color scheme for a preliminary visual check; the extracted scheme is then passed to JavaScript, where D3.js draws the visualization and renders the colors. The specific steps are:
S41, take k pixels at random from the picture as initial cluster centers, where k is the number of colors required for data-visualization color matching;
S42, compute the distance from each pixel in the picture to the k cluster centers, add the pixel to the cluster whose center is nearest, update the cluster centers, and jump to S43;
S43, judge whether the termination condition is reached, i.e. whether the number of iterations has reached the maximum or the specified precision has been attained; if so, jump to S44; if not, repeat S42;
S44, take as the color-matching output the color of the pixel closest to each final cluster center, i.e. output color values of pixels actually contained in the picture rather than the cluster-center values, which prevents outputting colors that do not occur in the picture.
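The patent implements this step with scikit-learn's KMeans; the self-contained sketch below hand-rolls the same S41 to S44 loop in plain Python so that the key detail of S44, returning real pixel colors rather than cluster centers, is explicit. It simplifies by using squared Euclidean distance in HSV space (ignoring that hue is circular), which is an assumption of this sketch, not something the patent specifies.

```python
import colorsys
import random

def extract_palette(pixels_rgb, k, max_iter=30, seed=0):
    """S41-S44 sketch: K-means over the HSV values of an image's pixels.
    Returns k RGB colours that actually occur in the image."""
    hsv = [colorsys.rgb_to_hsv(*p) for p in pixels_rgb]
    rng = random.Random(seed)
    centers = [list(hsv[i]) for i in rng.sample(range(len(hsv)), k)]  # S41

    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(max_iter):                    # S42/S43
        clusters = [[] for _ in range(k)]
        for p in hsv:
            clusters[min(range(k), key=lambda j: d2(p, centers[j]))].append(p)
        new_centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:               # centers unchanged
            break
        centers = new_centers
    # S44: snap each centre to the closest real pixel, so only colours
    # that occur in the picture are output
    palette = []
    for c in centers:
        idx = min(range(len(hsv)), key=lambda i: d2(hsv[i], c))
        palette.append(pixels_rgb[idx])
    return palette

# Toy "image": five red pixels and five blue pixels, k = 2
pixels = [(1.0, 0.0, 0.0)] * 5 + [(0.0, 0.0, 1.0)] * 5
print(extract_palette(pixels, k=2))  # one pure red and one pure blue
```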
S5, a group of color schemes corresponding to the video key frames can be extracted from the video for the user to choose from, or time-varying color schemes can be supplied directly; the number of colors used in the visualization is adjusted through the number of cluster centers, giving great flexibility;
S6, the color scheme generated from a picture or key-frame picture, represented by a set of color values, is back-filled pixel by pixel into the original picture, to verify that the color-filled picture is subjectively close to the original.
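The back-filling check in S6 can be sketched as a nearest-palette-color lookup per pixel (a hypothetical helper; the patent does not name one):

```python
def reverse_fill(pixels, palette):
    """Repaint each pixel with its nearest palette colour (squared RGB
    distance), so the filled picture can be compared with the original."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(palette, key=lambda c: d2(p, c)) for p in pixels]

original = [(0.9, 0.1, 0.1), (0.1, 0.1, 0.9), (0.8, 0.2, 0.1)]
palette = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(reverse_fill(original, palette))  # reds snap to pure red, blue to blue
```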
In this embodiment, for S2 above: as shown in figs. 5 and 6, which graphically illustrate the sliding-window shot segmentation, fig. 5 shows the execution of the sliding window on two shots, where frames 1-5 form one shot and frames 6-10 another, with a preset window size n = 3, i.e. 3 frames in the window. The average distance between the current frame and the 3 preceding frames is computed to judge whether continuity breaks sharply, i.e. whether a new shot starts; at each new shot start the frame set F in the window is also cleared. Fig. 6 shows the change of the frame distance avr over the first 6 shots of a 2018 Spring Festival Gala video, 160 frames in total, with shot-change points at frames 1, 29, 56, 108, 134 and 153; the avr values identify the shot boundaries well. The lower part of the figure shows the full frame sequence, each frame compressed to a 10-pixel-wide strip: the image content within each shot is clearly continuous, while the content jumps discontinuously between adjacent shots.
In addition, the first episode of "Story of Yanxi Palace" was selected as the experimental subject. In its 45-minute video, 648 shots were detected; the algorithm outputs each shot's number and the index of its starting frame. The segmentation of the first eight shots was examined (the first four frames of the video are black pictures and are treated as one shot), and manual inspection confirmed that all 8 shots were segmented correctly. Because a video is a continuous whole, experiments show that existing open-source shot-segmentation software is neither fully accurate nor uniform in output format, so the segmentation results were evaluated manually, judging whether the obtained starting frame of each shot and the final frame of the preceding shot belong to different shots. Manual inspection found misjudgments in 18 of the 648 shots, which is tolerable for the data-visualization color-matching problem. This sub-process has 3 tunable parameters: the window size, the ratio threshold between preavr and avr, and the vector-distance measure; different types and qualities of video need different settings for the best effect. The results could be further improved by, for example, processing the pictures in blocks.
Table 1: Example of the first 8 shots of the first episode of "Story of Yanxi Palace"
The TV drama "Story of Yanxi Palace" was selected as the shot segmentation material in order to test the algorithm's accuracy on the most common editing style: TV drama editing is relatively conventional, shot lengths are fairly uniform, and the influence of a director's personal editing style is largely absent. A method that segments such conventional video well still cannot fully adapt to video clips of all styles; to obtain a more robust shot segmentation method suitable for various editing styles, specific analysis or parameter adjustment is required for each special case.
Another challenge for shot segmentation algorithms is films that carry a director's personal style, with less common editing techniques such as long takes, fast cutting, fast push-pull zooms, and camera rolls. For example, the fast cutting common in martial arts and action movies, the long takes common in art films, the fast-cut combinations favored by director Jiang Wen, and the long takes favored by director Hou Hsiao-hsien all require parameter adjustment or a better-adapted shot segmentation method.
In this embodiment, regarding S3 above:
Taking the beginning of the first episode of "Story of Yanxi Palace" as an example, the data set to be clustered is the set of shot frames produced by the preceding shot segmentation algorithm. The average shot length is about 4 seconds, i.e. 120 frames. Key frames are extracted for each shot, with the number of extracted key frames set dynamically according to the shot's length. This video segment is 1 minute 49 seconds long and contains 28 shots in total, from which 68 key frames are extracted.
In this embodiment, regarding S5 above:
Taking the first episode of "Story of Yanxi Palace" as an example: the video has a frame rate of 30, a duration of 45 minutes, and 81790 frames in total. It is divided into 648 shots and 1714 key frames are extracted, from which 1714 time-sequenced color schemes corresponding to the key frames are generated, providing color matching that follows the video's timeline for rendering visualization works.
Taking the short video "domestic Xingwang" as an example: the frame rate is 24 and the duration is 1 minute 28 seconds; the extracted colors are used for color matching of the visualization work.
Taking the trailer of the movie "from your worldwide road" as an example: the frame rate is 25 and the duration is 2 minutes 16 seconds; the extracted colors are used for color matching of the visualization works.
In this embodiment, regarding S6 above:
Taking a Chinese painting by Zhang Daqian as an example, the extracted color scheme is visualized as a color matching result and is also used to reversely render the original picture, demonstrating that the color scheme extracted by pixel clustering yields a reverse-filled picture roughly similar to the original in its main aspects.
Taking a Makoto Shinkai-style animation picture as an example, the extracted color scheme is visualized as a color matching result, and the reverse-filled picture can be seen to be roughly similar to the original.
In practice, the color schemes extracted from pictures of a similar style are themselves similar, so their colors can be combined to provide a palette with more colors; for example, the colors extracted from Makoto Shinkai's animated films "Your Name" and "Weathering with You" can be combined for color matching of data visualization works. Color schemes extracted from the same film or television work share one style, and the color schemes of different key frames along the timeline can likewise be combined to generate palettes with more colors.
The above are only preferred embodiments of the present invention and are not intended to limit it; those skilled in the art may make various modifications and variations to the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (5)

1. A method for extracting data visualization color matching based on image clustering, characterized by comprising the following steps:
inputting a video or a picture;
Dividing the video through a sliding window with adjustable size to obtain a plurality of shot video frames;
Extracting one or more key frames from each segmented shot through a color histogram frame difference method and a clustering algorithm;
Clustering the HSV color values of pixels in the extracted key frame pictures one by one to obtain a color scheme of each key frame picture; or clustering the HSV color values of the pixels in the input picture to obtain a color scheme of the picture;
Outputting a color scheme, and applying the current color scheme to data visualization color matching;
Wherein segmenting the video through the sliding window with adjustable size to obtain the plurality of shot video frames comprises the following steps:
Defining a current frame C and a frame set F whose distances to C are to be calculated, and presetting a window size n, wherein the average distance between the current frame C and the frame set F in the sliding window is avr, and the average distance between the previous frame C' and its frame set F' is preavr;
S21, reading the current frame C and judging whether F is empty; if F is empty, adding the current frame C to F, setting avr=0, and jumping to S25; if not, jumping to S22;
S22, calculating the cosine distances between the color histogram vector of the current frame C and those of all frames in the frame set F, taking their arithmetic average as avr, and jumping to S23;
S23, judging whether avr > preavr x 9, preavr > 0 and avr > 0.1 hold simultaneously; if so, jumping to S27; if not, jumping to S24; wherein avr > preavr x 9 indicates a sudden break in inter-frame continuity, preavr > 0 indicates that the previous frame is not the start of a new shot, and avr > 0.1 indicates that the start of a new shot should differ sufficiently from every frame of the previous shot;
S24, judging whether the number of frames in the frame set F is smaller than n; if smaller than n, adding the current frame C to the frame set F and jumping to S25; if greater than or equal to n, jumping to S26;
S25, the current frame is not the start of a new shot and the window is not full; process the next frame: set preavr = avr and jump to S21;
S26, the current frame is not the start of a new shot and the window is full: the sliding window moves backwards by one frame in time order; because the frame set is unordered, in order to reduce the amount of calculation, the frame with the smallest frame number in the frame set is replaced by the current frame C, and the process jumps to S21;
S27, starting a new shot: the current frame C is marked as the start of a new shot, F is cleared, preavr = 0, and the process jumps to S21.
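The S21-S27 loop above can be sketched as follows. This is an illustrative reading, not the patented implementation: frames are represented by precomputed color histogram vectors, and two details the claim leaves open (whether the new shot's first frame re-seeds F in S27, and whether preavr is also updated when the window slides in S26) follow one plausible interpretation, flagged in the comments.

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity between two histogram vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 0.0
    return 1.0 - dot / (nu * nv)

def segment_shots(hists, n=3, ratio=9.0, min_dist=0.1):
    """Return the indices of frames that start a new shot (frame 0 included).

    hists    -- one color histogram vector per frame
    n        -- preset sliding-window size
    ratio    -- mutation threshold: avr > ratio * preavr (S23)
    min_dist -- minimum avr for a shot start (S23)
    """
    starts = [0]
    window = []        # frame set F
    preavr = 0.0
    for i, h in enumerate(hists):
        if not window:                      # S21: F is empty
            window.append(h)
            preavr = 0.0
            continue
        # S22: arithmetic mean of cosine distances to every frame in F
        avr = sum(cosine_distance(h, w) for w in window) / len(window)
        if avr > ratio * preavr and preavr > 0 and avr > min_dist:
            # S27: sudden break in continuity -> new shot; empty F
            starts.append(i)
            window = [h]   # reading choice: re-seed F with the new shot's first frame
            preavr = 0.0
        elif len(window) < n:
            window.append(h)                # S24/S25: window not full yet
            preavr = avr
        else:
            # S26: window full -> slide by replacing the oldest frame with C
            window.pop(0)
            window.append(h)
            preavr = avr   # reading choice: the claim states this update only in S25
    return starts
```

The min_dist guard matters: within a shot, small noise keeps avr tiny, so even a large avr/preavr ratio between near-identical frames cannot trigger a false shot boundary.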
2. The extraction method of claim 1, wherein the preset window size n is the frame rate of the video, i.e. 24, 25 or 30.
3. The extraction method according to claim 1, wherein extracting one or more key frames from each segmented shot through the color histogram frame difference method and a clustering algorithm comprises the following steps:
S31, dividing the frame set into m equal parts and taking the first frame of each part as an initial clustering center;
S32, for each frame in the frame set, calculating its distances to the m clustering centers respectively, adding it to the cluster whose clustering center is closest, updating the clustering centers, and jumping to S33;
S33, judging whether a termination condition is reached, the termination condition being that the number of iterations reaches the maximum number of iterations or that the clustering centers no longer change; if the termination condition is reached, jumping to S34; if not, repeating S32;
S34, selecting from the frame set the frames closest to the m clustering centers at the end of the algorithm and outputting them as key frames.
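Steps S31-S34 above are a k-means pass over per-frame feature vectors with deterministic seeding. A minimal sketch, assuming frames are already reduced to feature vectors (e.g. color histograms) and using squared Euclidean distance, which the claim does not fix:

```python
def extract_key_frames(frames, m, max_iter=20):
    """k-means over per-frame feature vectors (S31-S34).

    frames -- list of feature vectors, one per frame
    m      -- number of key frames to extract
    Returns the indices of the frames closest to the final cluster centers.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    # S31: split the frame set into m equal parts, seed with each part's first frame
    step = len(frames) // m
    centers = [list(frames[j * step]) for j in range(m)]
    for _ in range(max_iter):
        # S32: assign every frame to its nearest center
        clusters = [[] for _ in range(m)]
        for f in frames:
            clusters[min(range(m), key=lambda c: dist(f, centers[c]))].append(f)
        # update each center to the mean of its cluster
        new_centers = [[sum(x) / len(cl) for x in zip(*cl)] if cl else centers[j]
                       for j, cl in enumerate(clusters)]
        # S33: stop when the centers no longer change (or max_iter is hit)
        if new_centers == centers:
            break
        centers = new_centers
    # S34: output the frame nearest to each center as that cluster's key frame
    return [min(range(len(frames)), key=lambda i: dist(frames[i], c))
            for c in centers]
```

Seeding with the first frame of each of m equal partitions exploits the temporal ordering of a shot, so the initial centers are already spread across the shot's duration.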
4. The extraction method of claim 1, wherein clustering the HSV color values of pixels in the pictures to obtain a color scheme for each key frame picture comprises the following steps:
S41, randomly taking k pixel points from the picture as initial clustering centers, where k is the number of colors required for data visualization color matching;
S42, calculating the distance between each pixel point in the picture and the k clustering centers, adding each pixel to the cluster of its closest clustering center, updating the clustering centers, and jumping to S43;
S43, judging whether a termination condition is reached, wherein the termination condition is whether the iteration number reaches the maximum iteration number or whether the specified precision is reached; if the termination condition is reached, jumping to S44; if the termination condition is not reached, repeating S42;
S44, taking the colors of the pixel points closest to the respective clustering centers after the algorithm ends as the color matching output, i.e. outputting color values of pixels actually contained in the picture.
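Steps S41-S44 above can be sketched as a standard k-means over pixel color values, with the palette taken from actual pixels rather than cluster means. A sketch under assumptions: pixels are given as (h, s, v) tuples and squared Euclidean distance is used, though the claim does not fix the distance measure:

```python
import random

def extract_palette(pixels, k, max_iter=20, seed=0):
    """k-means over pixel color values (S41-S44).

    pixels -- list of (h, s, v) tuples
    k      -- number of palette colors required
    Returns k actual pixel colors, one nearest to each cluster center.
    """
    rng = random.Random(seed)

    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    # S41: k random pixels as initial centers
    centers = [list(p) for p in rng.sample(pixels, k)]
    for _ in range(max_iter):
        # S42: assign each pixel to its nearest center, then recompute centers
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[min(range(k), key=lambda c: dist(p, centers[c]))].append(p)
        new_centers = [[sum(x) / len(cl) for x in zip(*cl)] if cl else centers[j]
                       for j, cl in enumerate(clusters)]
        # S43: stop when the centers converge (or max_iter is hit)
        if new_centers == centers:
            break
        centers = new_centers
    # S44: the palette is the real pixel color closest to each center
    return [min(pixels, key=lambda p: dist(p, c)) for c in centers]
```

Returning the nearest real pixel rather than the cluster mean guarantees every palette entry is a color that actually occurs in the picture, which is the point of S44.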
5. The extraction method of claim 1, wherein outputting the color scheme and applying the current color scheme to data visualization color matching comprises:
outputting a series of color schemes of the video, displaying the color schemes in real time as the video plays, and applying the current color scheme to data visualization color matching; or,
outputting a color scheme of the picture, applying the current color scheme to data visualization color matching, and reversely filling it into the original picture pixel by pixel.
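The reverse pixel filling named in the claim above amounts to repainting every pixel with its nearest palette color. A minimal sketch, assuming the palette comes from the pixel-clustering step and shares the pixels' color space:

```python
def refill_image(pixels, palette):
    """Reverse-fill: replace every pixel with the nearest palette color.

    pixels  -- list of (h, s, v) tuples from the original picture
    palette -- colors produced by the pixel-clustering step
    Returns a picture of the same size drawn only with palette colors,
    which should look roughly similar to the original in its main aspects.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(palette, key=lambda c: dist(p, c)) for p in pixels]
```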
CN202010784746.1A 2019-11-13 2020-08-06 Image clustering-based data visual color matching extraction method Active CN111768469B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911106049 2019-11-13
CN2019111060494 2019-11-13

Publications (2)

Publication Number Publication Date
CN111768469A CN111768469A (en) 2020-10-13
CN111768469B true CN111768469B (en) 2024-05-28

Family

ID=72729328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010784746.1A Active CN111768469B (en) 2019-11-13 2020-08-06 Image clustering-based data visual color matching extraction method

Country Status (1)

Country Link
CN (1) CN111768469B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112579823B (en) * 2020-12-28 2022-06-24 山东师范大学 Video abstract generation method and system based on feature fusion and incremental sliding window
CN115022675B (en) * 2022-07-01 2023-12-15 天翼数字生活科技有限公司 Video playing detection method and system
CN117474817B (en) * 2023-12-26 2024-03-15 江苏奥斯汀光电科技股份有限公司 Method for content unification of composite continuous images

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6701010B1 (en) * 1998-02-06 2004-03-02 Fujitsu Limited Color image processing apparatus and pattern extracting apparatus
CN103065153A (en) * 2012-12-17 2013-04-24 西南科技大学 Video key frame extraction method based on color quantization and clusters
CN104574307A (en) * 2014-12-30 2015-04-29 北京科技大学 Method for extracting primary colors of painting work image
CN107220585A (en) * 2017-03-31 2017-09-29 南京邮电大学 A kind of video key frame extracting method based on multiple features fusion clustering shots
CN107590419A (en) * 2016-07-07 2018-01-16 北京新岸线网络技术有限公司 Camera lens extraction method of key frame and device in video analysis
CN108846869A (en) * 2018-05-24 2018-11-20 浙江传媒学院 A kind of clothes Automatic color matching method based on natural image color


Similar Documents

Publication Publication Date Title
CN111768469B (en) Image clustering-based data visual color matching extraction method
Li et al. Learning to enhance low-light image via zero-reference deep curve estimation
US9892324B1 (en) Actor/person centric auto thumbnail
Du et al. Saliency-guided color-to-gray conversion using region-based optimization
Deselaers et al. Pan, zoom, scan—time-coherent, trained automatic video cropping
Pierre et al. Luminance-chrominance model for image colorization
Aksoy et al. Interactive high-quality green-screen keying via color unmixing
US20080043041A2 (en) Image Blending System, Method and Video Generation System
CN104700442A (en) Image processing method and system for automatic filter and character adding
US20210272238A1 (en) Method to generate additional level of detail when zooming in on an image
CN106530309A (en) Video matting method and system based on mobile platform
CN113301408A (en) Video data processing method and device, electronic equipment and readable storage medium
CN113068034A (en) Video encoding method and device, encoder, equipment and storage medium
Gao et al. Real-time deep image retouching based on learnt semantics dependent global transforms
Wu et al. Color transfer with salient features mapping via attention maps between images
CN113411553A (en) Image processing method, image processing device, electronic equipment and storage medium
CN109788311B (en) Character replacement method, electronic device, and storage medium
CN115063800B (en) Text recognition method and electronic equipment
US20040130554A1 (en) Application of visual effects to a region of interest within an image
Redfern Colour palettes in US film trailers: a comparative analysis of movie barcode
CN111652792A (en) Image local processing method, image live broadcasting method, image local processing device, image live broadcasting equipment and storage medium
KR20030062586A (en) Human area detection for mobile video telecommunication system
CN105631812B (en) Control method and control device for color enhancement of display image
CN112488972A (en) Method and device for synthesizing green screen image and virtual image in real time
CN108833876B (en) A kind of stereoscopic image content recombination method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant