CN108010032A - Video landscape processing method and device based on adaptive tracking-box segmentation - Google Patents
- Publication number
- CN108010032A CN108010032A CN201711420316.6A CN201711420316A CN108010032A CN 108010032 A CN108010032 A CN 108010032A CN 201711420316 A CN201711420316 A CN 201711420316A CN 108010032 A CN108010032 A CN 108010032A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Circuits (AREA)
Abstract
The invention discloses a video landscape processing method based on adaptive tracking-box segmentation, together with a corresponding device, computing device and computer storage medium. The method includes: performing scene segmentation on a sub-region of the t-th frame image according to the tracking box corresponding to the t-th frame, to obtain the segmentation result for that frame; determining the second foreground image from the segmentation result; drawing a landscape-content effect map and fusing it with the second foreground image to obtain the processed t-th frame image; overwriting the original t-th frame image with the processed one to obtain processed video data; and displaying the processed video data. Based on a frame's segmentation result, the scheme can quickly and accurately draw the corresponding landscape-content effect map and use it to add a beautification effect to the landscape region of the frame, effectively beautifying the display effect of the video data.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a video landscape processing method, device, computing device and computer storage medium based on adaptive tracking-box segmentation.
Background art
People often shoot or record video in daily life; when travelling outdoors, for example, many people like to shoot videos of the local scenery as souvenirs. Yet owing to weather, seasonal changes and similar factors, captured landscape elements such as sky, grass and trees may not look attractive enough to satisfy users, who would like the scenery in the video beautified so that it looks more pleasing. In the prior art, frame images in a video can be processed by raising brightness, adjusting lighting and the like to beautify the scenery, but such processing is rather crude: it cannot be restricted to only the region of the frame where the landscape actually lies, and the display effect of the resulting video data still fails to meet users' processing needs.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a video landscape processing method, device, computing device and computer storage medium based on adaptive tracking-box segmentation that overcome, or at least partly solve, the above problems.
According to one aspect of the invention, a video landscape processing method based on adaptive tracking-box segmentation is provided. The method processes each group of frame images into which the video is divided every n frames; for one such group, the method includes:

obtaining, from the group, the t-th frame image containing a landscape object and the tracking box corresponding to the (t-1)-th frame image, where t is greater than 1, the tracking box corresponding to the 1st frame image being determined from the segmentation result of the 1st frame image;

adjusting the tracking box of the (t-1)-th frame according to the t-th frame image to obtain the tracking box corresponding to the t-th frame; performing scene segmentation on a sub-region of the t-th frame according to that tracking box, to obtain the segmentation result for the t-th frame;

determining the second foreground image of the t-th frame from that segmentation result;

drawing a landscape-content effect map corresponding to the subject content of the second foreground image, and fusing the effect map with the second foreground image to obtain the processed t-th frame image;

overwriting the original t-th frame image with the processed one to obtain processed video data;

displaying the processed video data.
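The per-group loop summarised above can be sketched in Python. All five step functions are placeholders whose names and signatures are my own, not the patent's; the sketch only shows how the tracking box and segmentation result are threaded from frame to frame:

```python
import numpy as np

def process_group(frames, first_box, adjust_box, segment, draw_effect, fuse):
    """One group of n frames: the box from frame t-1 is adjusted for frame t,
    only the boxed sub-region is segmented, and the effect map is fused with
    the resulting foreground."""
    box = first_box
    out = []
    for t, frame in enumerate(frames):
        if t > 0:                        # box of the 1st frame comes from its own segmentation
            box = adjust_box(box, frame)
        mask = segment(frame, box)       # segmentation restricted to the box
        effect = draw_effect(frame, mask)
        out.append(fuse(frame, effect, mask))  # processed frame replaces the original
    return out
```

Any concrete adjuster, segmenter, effect renderer and fuser can be plugged in for the five callables.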
Further, drawing the landscape-content effect map corresponding to the subject content of the second foreground image further includes: identifying the second foreground image to determine its subject category; extracting key information of the landscape object from the second foreground image; and drawing, according to the subject category and the key information, the landscape-content effect map corresponding to the subject content of the second foreground image.
Further, drawing the effect map according to the subject category and key information of the second foreground image further includes: looking up, according to the subject category, a basic landscape-content effect map corresponding to the subject content of the second foreground image; and scaling and/or rotating the basic effect map according to the key information to obtain the landscape-content effect map.
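The claim leaves the scaling and rotation unspecified; the following is a minimal NumPy stand-in (integer-factor scaling via `np.kron`, rotation limited to 90° steps — a practical implementation would more likely use `cv2.resize` and `cv2.warpAffine`):

```python
import numpy as np

def adapt_effect_map(base_map, scale, quarter_turns):
    """Scale the looked-up base effect map by an integer factor, then rotate
    it by quarter_turns * 90 degrees, per the extracted key information."""
    scaled = np.kron(base_map, np.ones((scale, scale), dtype=base_map.dtype))
    return np.rot90(scaled, k=quarter_turns)
```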
Further, fusing the effect map with the second foreground image to obtain the processed t-th frame image further includes: determining, according to the key information, fusion position information for the effect map; and fusing the effect map with the second foreground image at that position to obtain the processed t-th frame image.
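Fusion at the determined position can be sketched as a simple alpha blend; the blending rule and the `alpha` value are assumptions, since the patent only states that the map is fused at the computed position:

```python
import numpy as np

def fuse_at(image, effect, top, left, alpha=0.5):
    """Alpha-blend the effect map into the image at the fusion position."""
    out = image.astype(np.float32).copy()
    h, w = effect.shape[:2]
    out[top:top + h, left:left + w] = (
        alpha * effect + (1 - alpha) * out[top:top + h, left:left + w])
    return out.astype(image.dtype)
```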
Further, after the processed t-th frame image is obtained, the method further includes: applying tone processing, lighting processing and/or brightness processing to the processed t-th frame image.
Further, adjusting the tracking box of the (t-1)-th frame according to the t-th frame image further includes: running recognition on the t-th frame image to determine the first foreground image for the landscape object in the t-th frame; applying the tracking box of the (t-1)-th frame to the t-th frame image; and adjusting that tracking box according to the first foreground image of the t-th frame.
Further, adjusting the tracking box of the (t-1)-th frame according to the first foreground image of the t-th frame further includes: computing the proportion that pixels belonging to the first foreground image of the t-th frame occupy among all pixels inside the tracking box of the (t-1)-th frame, and taking this proportion as the first foreground-pixel ratio of the t-th frame; obtaining the second foreground-pixel ratio of the (t-1)-th frame, i.e. the proportion that pixels belonging to its first foreground image occupied among all pixels inside the same tracking box; computing the difference between the first foreground-pixel ratio of the t-th frame and the second foreground-pixel ratio of the (t-1)-th frame; and judging whether the difference exceeds a preset difference threshold — if so, resizing the tracking box according to the difference.
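Under the assumptions that the box is an `(x0, y0, x1, y1)` rectangle, the foreground comes as a binary mask, and the grow/shrink factors are fixed (none of these specifics appear in the claim), the ratio-based resize might look like:

```python
import numpy as np

def adjust_box_by_ratio(box, fg_mask, prev_ratio, threshold=0.1, step=1.1):
    """Resize the tracking box when the foreground's share of its pixels
    differs from the previous frame's share by more than the threshold."""
    x0, y0, x1, y1 = box
    ratio = float(fg_mask[y0:y1, x0:x1].mean())   # first foreground-pixel ratio
    if abs(ratio - prev_ratio) <= threshold:
        return box, ratio
    scale = step if ratio > prev_ratio else 1 / step
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) / 2 * scale, (y1 - y0) / 2 * scale
    return (int(cx - hw), int(cy - hh), int(cx + hw), int(cy + hh)), ratio
```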
Further, adjusting the tracking box of the (t-1)-th frame according to the first foreground image of the t-th frame further includes: computing the distance from the first foreground image of the t-th frame to each border of the tracking box of the (t-1)-th frame; and resizing that tracking box according to the distances and a preset distance threshold.
Further, adjusting the tracking box of the (t-1)-th frame according to the first foreground image of the t-th frame further includes: determining the centre position of the first foreground image in the t-th frame; and shifting the tracking box of the (t-1)-th frame, according to that centre position, so that the centre of the tracking box coincides with the centre of the first foreground image of the t-th frame.
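Re-centring the box on the foreground can be sketched as follows; using the centroid of a binary mask as the "centre position" is my assumption:

```python
import numpy as np

def recenter_box(box, fg_mask):
    """Shift the box so its centre coincides with the foreground centroid."""
    x0, y0, x1, y1 = box
    ys, xs = np.nonzero(fg_mask)
    cx, cy = xs.mean(), ys.mean()
    hw, hh = (x1 - x0) / 2, (y1 - y0) / 2
    return (int(round(cx - hw)), int(round(cy - hh)),
            int(round(cx + hw)), int(round(cy + hh)))
```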
Further, performing scene segmentation on the sub-region of the t-th frame according to its tracking box to obtain the segmentation result further includes: extracting the image to be segmented from the sub-region of the t-th frame according to the tracking box; performing scene segmentation on the image to be segmented to obtain its segmentation result; and deriving the segmentation result of the t-th frame from the segmentation result of the image to be segmented.
Further, extracting the image to be segmented from the sub-region of the t-th frame according to its tracking box further includes: extracting from the t-th frame the image inside the tracking box and taking the extracted image as the image to be segmented.
Further, performing scene segmentation on the image to be segmented to obtain its segmentation result further includes: feeding the image to be segmented into a scene segmentation network to obtain its segmentation result.
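Cropping the box, running the network on the crop only, and pasting the result back into a full-frame mask can be sketched as below; the network is any callable mapping a crop to a binary mask, since no concrete architecture is named here:

```python
import numpy as np

def segment_subregion(frame, box, seg_net):
    """Run scene segmentation only inside the tracking box and embed the
    crop's mask into a full-frame segmentation result."""
    x0, y0, x1, y1 = box
    crop_mask = seg_net(frame[y0:y1, x0:x1])
    full_mask = np.zeros(frame.shape[:2], dtype=crop_mask.dtype)
    full_mask[y0:y1, x0:x1] = crop_mask
    return full_mask
```

Segmenting only the crop is what gives the claimed reduction in data volume relative to full-frame segmentation.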
Further, displaying the processed video data further includes displaying it in real time; the method further includes uploading the processed video data to a cloud server.
Further, uploading the processed video data to a cloud server further includes: uploading the processed video data to a cloud video platform server, so that the cloud video platform server displays the video data on the cloud video platform.

Further, uploading the processed video data to a cloud server further includes: uploading the processed video data to a cloud live-broadcast server, so that the cloud live-broadcast server pushes the video data in real time to watching clients.

Further, uploading the processed video data to a cloud server further includes: uploading the processed video data to a cloud public-account server, so that the cloud public-account server pushes the video data to clients following the public account.
According to another aspect of the present invention, a video landscape processing device based on adaptive tracking-box segmentation is provided. The device processes each group of frame images into which the video is divided every n frames, and includes:

an acquisition module, adapted to obtain, from a group of frame images, the t-th frame image containing a landscape object and the tracking box corresponding to the (t-1)-th frame image, where t is greater than 1, the tracking box of the 1st frame being determined from the segmentation result of the 1st frame image;

a segmentation module, adapted to adjust the tracking box of the (t-1)-th frame according to the t-th frame image to obtain the tracking box of the t-th frame, and to perform scene segmentation on a sub-region of the t-th frame according to that box to obtain its segmentation result;

a determining module, adapted to determine the second foreground image of the t-th frame from its segmentation result;

a processing module, adapted to draw the landscape-content effect map corresponding to the subject content of the second foreground image and fuse it with the second foreground image to obtain the processed t-th frame image;

an overlay module, adapted to overwrite the original t-th frame image with the processed one to obtain processed video data;

a display module, adapted to display the processed video data.
Further, the processing module is further adapted to: identify the second foreground image and determine its subject category; extract key information of the landscape object from the second foreground image; and draw, according to the subject category and key information, the landscape-content effect map corresponding to the subject content of the second foreground image.
Further, the processing module is further adapted to: look up, according to the subject category of the second foreground image, a basic landscape-content effect map corresponding to its subject content; and scale and/or rotate the basic effect map according to the key information to obtain the landscape-content effect map.
Further, the processing module is further adapted to: determine, according to the key information, fusion position information for the effect map; and fuse the effect map with the second foreground image at that position to obtain the processed t-th frame image.
Further, the processing module is further adapted to apply tone processing, lighting processing and/or brightness processing to the processed t-th frame image.
Further, the segmentation module is further adapted to: run recognition on the t-th frame image to determine the first foreground image for the landscape object in the t-th frame; apply the tracking box of the (t-1)-th frame to the t-th frame image; and adjust that tracking box according to the first foreground image of the t-th frame.
Further, the segmentation module is further adapted to: compute the proportion that pixels belonging to the first foreground image of the t-th frame occupy among all pixels inside the tracking box of the (t-1)-th frame, and take it as the first foreground-pixel ratio of the t-th frame; obtain the second foreground-pixel ratio of the (t-1)-th frame, i.e. the proportion its first-foreground pixels occupied inside the same tracking box; compute the difference between the two ratios; and, if the difference exceeds a preset difference threshold, resize the tracking box according to the difference.
Further, the segmentation module is further adapted to: compute the distance from the first foreground image of the t-th frame to each border of the tracking box of the (t-1)-th frame, and resize that tracking box according to the distances and a preset distance threshold.
Further, the segmentation module is further adapted to: determine the centre position of the first foreground image in the t-th frame, and shift the tracking box of the (t-1)-th frame so that its centre coincides with the centre of the first foreground image of the t-th frame.
Further, the segmentation module is further adapted to: extract the image to be segmented from the sub-region of the t-th frame according to its tracking box; perform scene segmentation on the image to be segmented to obtain its segmentation result; and derive the segmentation result of the t-th frame from it.
Further, the segmentation module is further adapted to extract from the t-th frame the image inside its tracking box and take the extracted image as the image to be segmented.
Further, the segmentation module is further adapted to feed the image to be segmented into a scene segmentation network to obtain its segmentation result.
Further, the display module is further adapted to display the processed video data in real time; the device further includes an uploading module, adapted to upload the processed video data to a cloud server.
Further, the uploading module is further adapted to upload the processed video data to a cloud video platform server, so that the cloud video platform server displays the video data on the cloud video platform.

Further, the uploading module is further adapted to upload the processed video data to a cloud live-broadcast server, so that the cloud live-broadcast server pushes the video data in real time to watching clients.

Further, the uploading module is further adapted to upload the processed video data to a cloud public-account server, so that the cloud public-account server pushes the video data to clients following the public account.
According to yet another aspect of the invention, a computing device is provided, including a processor, a memory, a communication interface and a communication bus, through which the processor, memory and communication interface communicate with one another; the memory stores at least one executable instruction that causes the processor to perform the operations corresponding to the above video landscape processing method based on adaptive tracking-box segmentation.
In accordance with a further aspect of the present invention, a computer storage medium is provided, storing at least one executable instruction that causes a processor to perform the operations corresponding to the above video landscape processing method based on adaptive tracking-box segmentation.
With the technical solution provided by the present invention, scene segmentation is performed on frame images using a tracking box. Based on a frame's segmentation result, the corresponding landscape-content effect map can be drawn quickly and accurately and used to add a beautification effect to the landscape region of the frame, effectively beautifying the display effect of the video data, optimising the way video data is processed, and helping to meet users' processing needs.
The above is only an overview of the technical solution of the present invention. To make the technical means of the invention clearer so that it can be practised according to the specification, and to make the above and other objects, features and advantages of the invention more apparent, specific embodiments of the invention are set out below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the invention; throughout them, identical components are denoted by identical reference numerals. In the drawings:
Fig. 1 shows a flow diagram of a video landscape processing method based on adaptive tracking-box segmentation according to one embodiment of the invention;

Fig. 2 shows a flow diagram of a video landscape processing method based on adaptive tracking-box segmentation according to another embodiment of the invention;

Fig. 3 shows a structural diagram of a video landscape processing device based on adaptive tracking-box segmentation according to one embodiment of the invention;

Fig. 4 shows a structural diagram of a computing device according to an embodiment of the invention.
Detailed description of embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
Fig. 1 shows a flow diagram of a video landscape processing method based on adaptive tracking-box segmentation according to one embodiment of the invention. The method processes each group of frame images into which the video is divided every n frames. As shown in Fig. 1, for one such group the method comprises the following steps:

Step S100: obtain, from the group, the t-th frame image containing a landscape object and the tracking box corresponding to the (t-1)-th frame image.
During video shooting or recording, the number of landscape objects captured may change because the capture device is moved or for similar reasons; taking trees as the landscape object, for example, the number of trees captured or recorded may increase or decrease. In order to perform scene segmentation on the frame images of the video quickly and accurately, the method processes each group of frame images into which the video is divided every n frames.
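The "every n frames" grouping amounts to splitting the frame index sequence into consecutive runs of n — a trivial sketch, since the patent does not fix n:

```python
def split_into_groups(num_frames, n):
    """Indices 0..num_frames-1 split into consecutive groups of n frames."""
    return [list(range(i, min(i + n, num_frames)))
            for i in range(0, num_frames, n)]
```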
A landscape object is contained in each frame image; it can be an object such as sky, grass, trees or a mountain, or an object such as the sea, a lake or a beach. Those skilled in the art may set the landscape object according to actual needs, which is not limited here. When scene segmentation is to be performed on the t-th frame image of a group, where t is greater than 1, step S100 obtains the t-th frame image and the tracking box corresponding to the (t-1)-th frame image.
A foreground image may contain only the landscape object, the background image being the rest of the frame. To distinguish the foreground of a frame before segmentation from the foreground after segmentation, the present invention calls the foreground image before segmentation the first foreground image and the foreground image after segmentation the second foreground image. Similarly, the background image before segmentation is called the first background image and the background image after segmentation the second background image.
The tracking box corresponding to the (t-1)-th frame can completely enclose the first foreground image of that frame. Specifically, the tracking box of the 1st frame is determined from the segmentation result of the 1st frame image. The tracking box may be a rectangle that frames the first foreground image, thereby tracking the landscape object across frames.
Step S101: adjust the tracking box of the (t-1)-th frame according to the t-th frame image to obtain the tracking box of the t-th frame; according to that box, perform scene segmentation on a sub-region of the t-th frame to obtain its segmentation result.

While the first foreground image is being tracked with the box, the box needs adjusting for each frame. For the t-th frame, the size and position of the box of the (t-1)-th frame can be adjusted so that the adjusted box fits the t-th frame, yielding the tracking box of the t-th frame. Since this box can enclose the first foreground image of the t-th frame, scene segmentation can be applied, according to the box, to only a sub-region of the frame — for example the region the box frames. Compared with segmenting the full content of a frame as in the prior art, the invention segments only a sub-region, effectively reducing the data volume of image scene segmentation and improving processing efficiency.
Step S102: determine the second foreground image of the t-th frame from its segmentation result.

The segmentation result makes clear which pixels of the t-th frame belong to the second foreground image and which belong to the second background image, so the second foreground image of the t-th frame can be determined.
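Reading the second foreground image off the segmentation result reduces to masking, assuming a binary per-pixel label — which the text implies but does not formalise:

```python
import numpy as np

def extract_foreground(frame, seg_mask):
    """Keep the pixels labelled foreground (1); zero out the background (0)."""
    return np.where(seg_mask[..., None] == 1, frame, 0)
```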
Step S103: draw the landscape-content effect map corresponding to the subject content of the second foreground image, and fuse it with the second foreground image to obtain the processed t-th frame image.

Once the second foreground image has been obtained, the effect map matching its subject content is drawn. For example, if the content of the second foreground image is mainly a mountain, its subject content is the mountain and a mountain effect map is drawn; if the content is mainly the sea, its subject content is the sea and a sea effect map is drawn. Those skilled in the art may set the landscape-content effect maps according to actual needs, which is not limited here. After the effect map is drawn, it is fused with the second foreground image so that it blends truly and accurately with the landscape object in the second foreground image, yielding the processed t-th frame image.
Step S104: overwrite the original t-th frame image with the processed one to obtain the processed video data.

The processed t-th frame image directly overwrites the original t-th frame image, so the processed video data is obtained immediately; at the same time, the recording user can see the processed t-th frame image at once.
Step S105: display the processed video data.

Once the processed t-th frame image is obtained, it can directly overwrite the original frame. The overwrite is generally completed within 1/24 second — so brief that the human eye does not notice it, i.e. viewers do not perceive that the original t-th frame image in the video data has been replaced. Thus, when the processed video data is subsequently displayed, it is effectively displayed in real time while the video is being shot, recorded and/or played, and the user does not sense any covering of frame images in the video data.
According to the video landscape processing method based on adaptive tracking-box segmentation provided in this embodiment, scene segmentation is performed on frame images using a tracking box; based on a frame's segmentation result, the corresponding landscape-content effect map can be drawn quickly and accurately and used to add a beautification effect to the landscape region of the frame, effectively beautifying the display effect of the video data, optimising the processing of video data and helping to meet users' processing needs.
Fig. 2 shows a flow diagram of a video landscape processing method based on adaptive tracking-box segmentation according to another embodiment of the invention. The method processes each group of frame images into which the video is divided every n frames. As shown in Fig. 2, for one such group the method comprises the following steps:

Step S200: obtain, from the group, the t-th frame image containing a landscape object and the tracking box corresponding to the (t-1)-th frame image.

Here t is greater than 1. For example, when t is 2, step S200 obtains the 2nd frame image containing the landscape object and the tracking box corresponding to the 1st frame, the latter being determined from the segmentation result of the 1st frame image; when t is 3, step S200 obtains the 3rd frame image containing the landscape object and the tracking box of the 2nd frame, which was obtained by adjusting the box corresponding to the 1st frame during the scene segmentation of the 2nd frame image.
Step S201: perform recognition processing on frame t to determine the first foreground image for the landscape object in frame t, apply the tracking box corresponding to frame t-1 to frame t, and adjust the tracking box corresponding to frame t-1 according to the first foreground image in frame t.
Specifically, image processing tools of the prior art, such as AE (Adobe After Effects) or NUKE (The Foundry Nuke), can be used to perform recognition processing on frame t and identify which pixels in frame t belong to the first foreground image, thereby determining the first foreground image for the landscape object in frame t. After the first foreground image has been determined, the tracking box corresponding to frame t-1 can be placed on frame t and adjusted according to the first foreground image in frame t, so as to obtain the tracking box corresponding to frame t.
Specifically, the proportion of pixels of frame t that belong to the first foreground image among all the pixels within the tracking box corresponding to frame t-1 can be calculated, and this proportion is determined as the first foreground pixel proportion of frame t. Then the second foreground pixel proportion of frame t-1 is obtained, where the second foreground pixel proportion of frame t-1 is the proportion of pixels of frame t-1 that belong to the first foreground image among all the pixels within the tracking box corresponding to frame t-1. The difference between the first foreground pixel proportion of frame t and the second foreground pixel proportion of frame t-1 is then calculated, and it is judged whether the difference exceeds a preset difference threshold. If the difference exceeds the preset difference threshold, the tracking box corresponding to frame t-1 does not match the first foreground image in frame t, and the size of the tracking box corresponding to frame t-1 is adjusted according to the difference. If the difference does not exceed the preset difference threshold, the size of the tracking box corresponding to frame t-1 need not be adjusted. Those skilled in the art can set the preset difference threshold according to actual needs, which is not limited here.
Suppose that after the tracking box corresponding to frame t-1 is applied to frame t, the tracking box can still completely enclose the first foreground image in frame t, but the difference between the first foreground pixel proportion of frame t and the second foreground pixel proportion of frame t-1 exceeds the preset difference threshold. This indicates that, relative to the first foreground image in frame t, the tracking box corresponding to frame t-1 may be too large or too small, so its size needs to be adjusted. For example, when the first foreground pixel proportion of frame t is 0.9, the second foreground pixel proportion of frame t-1 is 0.7, and the difference between the two proportions exceeds the preset difference threshold, the size of the tracking box corresponding to frame t-1 can be adaptively enlarged according to the difference. As another example, when the first foreground pixel proportion of frame t is 0.5, the second foreground pixel proportion of frame t-1 is 0.7, and the difference between the two proportions exceeds the preset difference threshold, the size of the tracking box corresponding to frame t-1 can be adaptively reduced according to the difference.
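The ratio-based size adjustment described above can be sketched in a few lines. This is an illustrative example rather than the patented implementation: the binary NumPy mask, the (x0, y0, x1, y1) box representation, the 0.1 difference threshold and the linear scaling rule are all assumptions made for the sketch.

```python
import numpy as np

def adjust_box_by_ratio(mask, box, prev_ratio, diff_threshold=0.1):
    # mask: binary mask of frame t (1 = pixel of the first foreground image)
    # box:  (x0, y0, x1, y1) tracking box inherited from frame t-1
    # prev_ratio: second foreground pixel proportion of frame t-1
    x0, y0, x1, y1 = box
    region = mask[y0:y1, x0:x1]
    ratio = float(region.sum()) / region.size      # first foreground pixel proportion
    diff = ratio - prev_ratio
    if abs(diff) <= diff_threshold:                # box still matches the foreground
        return box, ratio
    # proportion grew -> box relatively too small -> enlarge; shrank -> reduce
    scale = 1.0 + diff
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w, h = (x1 - x0) * scale, (y1 - y0) * scale
    img_h, img_w = mask.shape
    return (max(0, int(cx - w / 2)), max(0, int(cy - h / 2)),
            min(img_w, int(cx + w / 2)), min(img_h, int(cy + h / 2))), ratio
```

When the proportion has barely changed, the inherited box is returned unchanged, mirroring the "no adjustment needed" branch in the text.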
Alternatively, the distance from the first foreground image in frame t to each border of the tracking box corresponding to frame t-1 can be calculated, and the size of the tracking box corresponding to frame t-1 is adjusted according to the calculated distances and a preset distance threshold. Those skilled in the art can set the preset distance threshold according to actual needs, which is not limited here. For example, if a calculated distance is smaller than the preset distance threshold, the tracking box corresponding to frame t-1 can be adaptively enlarged so that the distance from the first foreground image in frame t to each border of the tracking box satisfies the preset distance threshold; conversely, if a calculated distance is larger than the preset distance threshold, the tracking box corresponding to frame t-1 can be adaptively reduced so that the distance from the first foreground image in frame t to each border of the tracking box satisfies the preset distance threshold.
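A simplified sketch of this distance-based alternative follows. It is an assumption-level illustration, not the claimed method: the distances are taken from the bounding box of the foreground mask, and the box is rebuilt so that every border ends up exactly one preset margin away from the foreground, which both enlarges a too-tight box and shrinks a too-loose one.

```python
import numpy as np

def adjust_box_by_margin(mask, margin=5):
    # mask: binary mask of frame t (1 = first foreground image)
    # margin: preset distance threshold between foreground and each border
    ys, xs = np.nonzero(mask)
    fx0, fy0 = xs.min(), ys.min()                  # tight foreground bounds
    fx1, fy1 = xs.max() + 1, ys.max() + 1
    img_h, img_w = mask.shape
    # place each border exactly `margin` pixels from the foreground, clipped
    return (max(0, int(fx0 - margin)), max(0, int(fy0 - margin)),
            min(img_w, int(fx1 + margin)), min(img_h, int(fy1 + margin)))
```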
In addition, the center position of the first foreground image in frame t can be determined from the first foreground image in frame t, and the position of the tracking box corresponding to frame t-1 is adjusted according to this center position, so that the center of the tracking box corresponding to frame t-1 coincides with the center of the first foreground image in frame t and the first foreground image is located in the middle of the tracking box.
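The recentering step can be sketched as below (illustrative only; the centroid of the mask is assumed to stand in for the "center position of the first foreground image"):

```python
import numpy as np

def recenter_box(mask, box):
    # shift the tracking box, keeping its size, so its center coincides
    # with the center of the first foreground image in frame t
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                  # foreground center position
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    nx0 = int(round(cx - w / 2.0))
    ny0 = int(round(cy - h / 2.0))
    return (nx0, ny0, nx0 + w, ny0 + h)
```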
Step S202: according to the tracking box corresponding to frame t, extract the image to be segmented from a partial region of frame t.
Specifically, the image within the tracking box corresponding to frame t can be extracted from frame t and determined as the image to be segmented. Since the tracking box corresponding to frame t can completely enclose the first foreground image in frame t, the pixels of frame t outside the tracking box belong to the second background image. Therefore, once the tracking box corresponding to frame t has been obtained, the image within that tracking box can be extracted from frame t and determined as the image to be segmented, and subsequent scene segmentation processing is performed only on this image to be segmented, which effectively reduces the data processing amount of image scene segmentation and improves processing efficiency.
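Extracting the image to be segmented is a plain crop. A minimal sketch, assuming the frame is a NumPy array and the box is (x0, y0, x1, y1):

```python
import numpy as np

def extract_image_to_segment(frame, box):
    # crop the region inside the tracking box of frame t; only this
    # sub-image is passed to scene segmentation, everything outside
    # the box is treated as second background
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]
```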
Step S203: perform scene segmentation processing on the image to be segmented to obtain the segmentation result corresponding to the image to be segmented.
Since the tracking box corresponding to frame t can completely enclose the first foreground image in frame t, no scene segmentation needs to be performed on the pixels of frame t outside the tracking box; it can be determined directly that those pixels belong to the second background image. Scene segmentation therefore only needs to be performed on the extracted image to be segmented.
A deep learning method can be used for the scene segmentation of the image to be segmented. Deep learning is a method in machine learning based on representation learning of data. An observation (such as an image) can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a series of edges, regions of particular shapes, and so on; some particular representations make it easier to learn tasks from examples. Scene segmentation is performed on the image to be segmented using a deep learning segmentation method, for example a scene segmentation network trained with a deep learning method, to obtain the segmentation result corresponding to the image to be segmented. According to the segmentation result, it can be determined which pixels of the image to be segmented belong to the second foreground image and which pixels belong to the second background image.
Specifically, the image to be segmented can be input into the scene segmentation network to obtain the segmentation result corresponding to the image to be segmented. For a scene segmentation network of the prior art to perform scene segmentation on an input image, the size of the image needs to be adjusted to a preset size, for example 320 × 240 pixels, whereas in the ordinary case the size of the image is mostly 1280 × 720 pixels; the image therefore needs first to be resized to 320 × 240 pixels, and scene segmentation is then performed on the resized image. However, when the scene segmentation network is used to perform scene segmentation on a frame image of a video, if the first foreground image occupies a small proportion of the frame image, for example a proportion of 0.2, the prior art still needs to scale the whole frame image down before segmenting it. During segmentation, pixels that actually belong to the edge of the second foreground image are then easily classified into the second background image, so the segmentation precision of the obtained segmentation result is low and the segmentation effect is poor.
According to the technical solution provided by the present invention, the image within the tracking box corresponding to frame t, extracted from frame t, is determined as the image to be segmented, and scene segmentation is then performed on this image to be segmented. When the first foreground image occupies a small proportion of frame t, the size of the extracted image to be segmented is also far smaller than the size of frame t; compared with adjusting the whole frame image to the preset size, adjusting only the image to be segmented to the preset size retains the foreground image information more effectively, so the segmentation precision of the obtained segmentation result is higher.
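The precision argument can be made concrete with a small back-of-the-envelope computation. The 720 × 1280 frame and 320 × 240 network input come from the example in the text; the 300 × 400 crop size is an assumed illustration of a tracking box around a small foreground.

```python
def downscale_factor(src_hw, dst_hw=(240, 320)):
    # per-axis shrink factor when resizing src_hw to the network's
    # preset input size (240 x 320 pixels here)
    return (src_hw[0] / dst_hw[0], src_hw[1] / dst_hw[1])

# resizing the whole 720 x 1280 frame shrinks the foreground 3x / 4x,
# while resizing a 300 x 400 crop around the foreground barely shrinks it
full_frame = downscale_factor((720, 1280))
crop_only = downscale_factor((300, 400))
```

The crop loses far less foreground resolution, which is why edge pixels of the second foreground image are less likely to be misclassified.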
Step S204: according to the segmentation result corresponding to the image to be segmented, obtain the segmentation result corresponding to frame t.
The image to be segmented is the image within the tracking box corresponding to frame t. From the segmentation result corresponding to the image to be segmented, it can be clearly determined which pixels of the image to be segmented belong to the second foreground image and which belong to the second background image, while the pixels of frame t outside the tracking box belong to the second background image. The segmentation result corresponding to frame t can therefore be obtained conveniently and quickly from the segmentation result corresponding to the image to be segmented, so that it can be clearly determined which pixels of frame t belong to the second foreground image and which belong to the second background image. Compared with the prior art, which performs scene segmentation on the full content of a frame image, the present invention performs scene segmentation only on the image to be segmented extracted from the frame image, which effectively reduces the data processing amount of image scene segmentation and improves processing efficiency.
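Assembling the full-frame segmentation result from the crop's result is then a paste-back. An illustrative sketch, assuming label 0 means second background:

```python
import numpy as np

def full_frame_segmentation(frame_hw, box, crop_result):
    # inside the tracking box use the network output; every pixel of
    # frame t outside the box belongs to the second background image
    full = np.zeros(frame_hw, dtype=crop_result.dtype)
    x0, y0, x1, y1 = box
    full[y0:y1, x0:x1] = crop_result
    return full
```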
Step S205: according to the segmentation result corresponding to frame t, determine the second foreground image of frame t.
Step S206: perform recognition on the second foreground image to determine its subject category.
Specifically, the second foreground image can be matched with preset subject category images to obtain a subject category matching result, and the subject category of the second foreground image is then determined according to the subject category matching result. Those skilled in the art can set the subject categories according to actual needs, which is not limited here. Specifically, the subject categories may include a lake-and-mountain scenery category, an exotic caves and rocks category, a natural streams and waterfalls category, a sunny beach category, a meteorology and weather category, a biological landscape category, a historic site category, a modern architecture category, an ethnic customs category, an urban and rural scenery category, and so on. The preset subject category images are images set in advance for each subject category. For example, if, according to the subject category matching result, the second foreground image matches an image of the sunny beach category, the subject category of the second foreground image is determined as the sunny beach category.
In addition, in a specific embodiment, a trained recognition network can also be used to recognize the subject category of the second foreground image. Since the recognition network has already been trained, the subject category of the second foreground image can be obtained conveniently by inputting the second foreground image into the recognition network.
Step S207: extract key information of the landscape object from the second foreground image.
To facilitate drawing the landscape content effect map, key information of the landscape object needs to be extracted from the second foreground image. The key information may specifically be key point information, key region information, and/or key line information. The embodiments of the present invention are illustrated with key point information as the key information, but the key information of the present invention is not limited to key point information. Using key point information can improve the processing speed and efficiency of drawing the landscape content effect map: the landscape content effect map can be drawn directly according to the key point information, without subsequent complex operations such as further calculation and analysis of the key information. Meanwhile, key point information is easy to extract and accurate, which makes the drawing of the landscape content effect map more precise. Specifically, key point information of the landscape object's edges can be extracted from the second foreground image.
Step S208: according to the subject category and key information of the second foreground image, draw the landscape content effect map corresponding to the subject content of the second foreground image.
In order to draw the landscape content effect map conveniently and quickly, many basic landscape content effect maps can be drawn in advance. When drawing the landscape content effect map corresponding to the subject content of the second foreground image, the corresponding basic landscape content effect map can first be found and then processed, so that the landscape content effect map is obtained quickly. These basic landscape content effect maps may include effect maps of different subject contents, for example an effect map whose subject content is a mountain, an effect map whose subject content is a lake, an effect map whose subject content is a beach, and so on. In addition, to facilitate managing these basic landscape content effect maps, an effect map library can be established, and the basic landscape content effect maps are stored in it according to subject category. For example, the subject categories of the effect map whose subject content is a mountain and of the effect map whose subject content is a lake are both set as the lake-and-mountain scenery category, and the subject categories of the effect map whose subject content is the sea and of the effect map whose subject content is a beach are both set as the sunny beach category.
Specifically, the basic landscape content effect map corresponding to the subject content of the second foreground image can be looked up according to the subject category of the second foreground image; then, according to the key information, the basic landscape content effect map is scaled and/or rotated to obtain the landscape content effect map. For example, when the subject category of the second foreground image is the sunny beach category and its subject content is a beach, the basic landscape content effect maps whose subject category is the sunny beach category can first be looked up in the effect map library, and the basic landscape content effect map whose subject content is a beach is then looked up among them. According to the key information, that basic landscape content effect map is then scaled, rotated, cropped and so on to better fit the landscape object, thereby obtaining the landscape content effect map corresponding to the subject content of the second foreground image.
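The scaling/rotation of a basic effect map can be sketched with nearest-neighbour resizing and quarter-turn rotation. This is illustrative only; a real implementation would interpolate and rotate by arbitrary angles derived from the key information.

```python
import numpy as np

def fit_effect_map(base_map, target_hw, quarter_turns=0):
    # resize a basic landscape content effect map to the target size
    # (nearest neighbour) and rotate it by multiples of 90 degrees
    th, tw = target_hw
    h, w = base_map.shape[:2]
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    scaled = base_map[rows][:, cols]
    return np.rot90(scaled, quarter_turns)
```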
Step S209: fuse the landscape content effect map with the second foreground image to obtain the processed frame t.
Specifically, fusion position information corresponding to the landscape content effect map is determined according to the key information; then, according to the fusion position information, the landscape content effect map and the second foreground image are fused to obtain the processed frame t.
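The fusion itself can be sketched as per-pixel alpha blending at the fusion position. Illustrative assumptions: grayscale arrays, `pos` is the top-left corner derived from the key information, and `alpha` holds the effect map's opacity in [0, 1].

```python
import numpy as np

def fuse_effect_map(frame, effect, alpha, pos):
    # blend the landscape content effect map into frame t at `pos`
    out = frame.astype(np.float32)
    x, y = pos
    h, w = effect.shape[:2]
    roi = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * effect + (1.0 - alpha) * roi
    return out.astype(frame.dtype)
```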
Step S210: perform tone processing, lighting processing and/or brightness processing on the processed frame t.
Since the processed frame t contains the landscape content effect map, image processing can be performed on it to make its display effect more natural and realistic. The image processing may include tone processing, lighting processing, brightness processing and the like on the processed frame t. For example, brightness enhancement is applied to the processed frame t to make its overall effect more natural and attractive.
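Brightness enhancement, for instance, can be a linear gain plus offset with clipping. The gain and offset values below are illustrative only.

```python
import numpy as np

def enhance_brightness(frame, gain=1.1, offset=10.0):
    # simple brightness processing for the processed frame t
    out = frame.astype(np.float32) * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)
```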
Step S211: cover the original frame t with the processed frame t to obtain the processed video data.
The original frame t is directly overwritten with the processed frame t, so that the processed video data is obtained directly. Meanwhile, the user who is recording can immediately see the processed frame t.
Step S212: display the processed video data.
After the processed video data is obtained, it can be displayed in real time, and the user can directly see the display effect of the processed video data.
Step S213: upload the processed video data to a cloud server.
The processed video data can be uploaded directly to a cloud server. Specifically, the processed video data can be uploaded to one or more cloud video platform servers, such as those of iQIYI, Youku or Kuai Video, so that the cloud video platform server displays the video data on the cloud video platform. Alternatively, the processed video data can be uploaded to a cloud live-streaming server; when a user at a live viewing end enters the cloud live-streaming server to watch, the cloud live-streaming server pushes the video data in real time to that viewing user's client. Alternatively, the processed video data can be uploaded to a cloud official-account server; when a user follows the official account, the cloud official-account server pushes the video data to that follower's client. Further, the cloud official-account server can also push video data matching the viewing habits of the users following the official account to their clients.
According to the video landscape processing method based on adaptive tracking box segmentation provided in this embodiment, the segmentation result corresponding to a frame image is obtained quickly and accurately using the tracking box, which effectively improves the segmentation precision and processing efficiency of image scene segmentation. Based on the foreground image obtained from the segmentation result, the subject category of the foreground image is determined accurately and the key information of the landscape object is extracted, so the landscape content effect map can be drawn quickly and fused well with the landscape object, further improving the display effect of the video data.
Fig. 3 shows a schematic structural diagram of a video landscape processing apparatus based on adaptive tracking box segmentation according to an embodiment of the present invention. The apparatus processes each group of frame images into which a video is divided every n frames. As shown in Fig. 3, the apparatus includes: an acquisition module 310, a segmentation module 320, a determination module 330, a processing module 340, a covering module 350 and a display module 360.
The acquisition module 310 is adapted to: obtain frame t, which contains the landscape object, in a group of frame images, and the tracking box corresponding to frame t-1.
Here t is greater than 1, and the tracking box corresponding to the 1st frame image is determined according to the segmentation result corresponding to the 1st frame image.
The segmentation module 320 is adapted to: adjust the tracking box corresponding to frame t-1 according to frame t to obtain the tracking box corresponding to frame t; and, according to the tracking box corresponding to frame t, perform scene segmentation on a partial region of frame t to obtain the segmentation result corresponding to frame t.
Optionally, the segmentation module 320 is further adapted to: perform recognition processing on frame t to determine the first foreground image for the landscape object in frame t; apply the tracking box corresponding to frame t-1 to frame t; and adjust the tracking box corresponding to frame t-1 according to the first foreground image in frame t.
Specifically, the segmentation module 320 is further adapted to: calculate the proportion of pixels of frame t belonging to the first foreground image among all pixels within the tracking box corresponding to frame t-1, and determine this proportion as the first foreground pixel proportion of frame t; obtain the second foreground pixel proportion of frame t-1, which is the proportion of pixels of frame t-1 belonging to the first foreground image among all pixels within the tracking box corresponding to frame t-1; calculate the difference between the first foreground pixel proportion of frame t and the second foreground pixel proportion of frame t-1; judge whether the difference exceeds a preset difference threshold; and if so, adjust the size of the tracking box corresponding to frame t-1 according to the difference.
The segmentation module 320 is further adapted to: calculate the distance from the first foreground image in frame t to each border of the tracking box corresponding to frame t-1; and adjust the size of the tracking box corresponding to frame t-1 according to the distances and a preset distance threshold.
The segmentation module 320 is further adapted to: determine the center position of the first foreground image in frame t according to the first foreground image in frame t; and adjust the position of the tracking box corresponding to frame t-1 according to that center position, so that the center of the tracking box corresponding to frame t-1 coincides with the center of the first foreground image in frame t.
Optionally, the segmentation module 320 is further adapted to: extract the image to be segmented from a partial region of frame t according to the tracking box corresponding to frame t; perform scene segmentation on the image to be segmented to obtain the segmentation result corresponding to the image to be segmented; and obtain the segmentation result corresponding to frame t according to the segmentation result corresponding to the image to be segmented.
The segmentation module 320 is further adapted to: extract the image within the tracking box corresponding to frame t from frame t, and determine the extracted image as the image to be segmented.
The segmentation module 320 is further adapted to: input the image to be segmented into the scene segmentation network to obtain the segmentation result corresponding to the image to be segmented.
The determination module 330 is adapted to: determine the second foreground image of frame t according to the segmentation result corresponding to frame t.
The processing module 340 is adapted to: draw the landscape content effect map corresponding to the subject content of the second foreground image, and fuse the landscape content effect map with the second foreground image to obtain the processed frame t.
Optionally, the processing module 340 is further adapted to: perform recognition on the second foreground image to determine its subject category; extract the key information of the landscape object from the second foreground image; and draw the landscape content effect map corresponding to the subject content of the second foreground image according to the subject category and key information of the second foreground image.
Optionally, the processing module 340 is further adapted to: look up the basic landscape content effect map corresponding to the subject content of the second foreground image according to the subject category of the second foreground image; and scale and/or rotate the basic landscape content effect map according to the key information to obtain the landscape content effect map.
Optionally, the processing module 340 is further adapted to: determine fusion position information corresponding to the landscape content effect map according to the key information; and fuse the landscape content effect map with the second foreground image according to the fusion position information to obtain the processed frame t.
Optionally, the processing module 340 is further adapted to: perform tone processing, lighting processing and/or brightness processing on the processed frame t.
The covering module 350 is adapted to: cover the original frame t with the processed frame t to obtain the processed video data.
The display module 360 is adapted to: display the processed video data.
After the display module 360 obtains the processed video data, it can display the data in real time, and the user can directly see the display effect of the processed video data.
The apparatus may further include an uploading module 370, adapted to upload the processed video data to a cloud server.
The uploading module 370 can upload the processed video data directly to a cloud server. Specifically, the uploading module 370 can upload the processed video data to one or more cloud video platform servers, such as those of iQIYI, Youku or Kuai Video, so that the cloud video platform server displays the video data on the cloud video platform. Alternatively, the uploading module 370 can upload the processed video data to a cloud live-streaming server; when a user at a live viewing end enters the cloud live-streaming server to watch, the cloud live-streaming server pushes the video data in real time to that viewing user's client. Alternatively, the uploading module 370 can upload the processed video data to a cloud official-account server; when a user follows the official account, the cloud official-account server pushes the video data to that follower's client. Further, the cloud official-account server can also push video data matching the viewing habits of the users following the official account to their clients.
According to the video landscape processing apparatus based on adaptive tracking box segmentation provided in this embodiment, scene segmentation is performed on a frame image using a tracking box; based on the segmentation result of the frame image, the corresponding landscape content effect map can be drawn quickly and accurately, and a beautification effect is added to the landscape region of the frame image according to the landscape content effect map. This effectively beautifies the display effect of the video data, optimizes the video data processing mode, and helps meet users' processing demands.
The present invention further provides a non-volatile computer storage medium storing at least one executable instruction, and the executable instruction can perform the video landscape processing method based on adaptive tracking box segmentation of any of the above method embodiments.
Fig. 4 shows a schematic structural diagram of a computing device according to an embodiment of the present invention. The specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 4, the computing device may include: a processor 402, a communications interface 404, a memory 406, and a communication bus 408.
Wherein:
The processor 402, the communications interface 404, and the memory 406 communicate with one another through the communication bus 408.
The communications interface 404 is configured to communicate with network elements of other devices, such as clients or other servers.
The processor 402 is configured to execute a program 410, and may specifically perform the relevant steps in the above embodiments of the video landscape processing method based on adaptive tracking-box segmentation.
Specifically, the program 410 may include program code, where the program code includes computer operation instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or may be processors of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is configured to store the program 410. The memory 406 may include a high-speed RAM memory, and may further include a non-volatile memory, for example, at least one magnetic disk memory.
The program 410 may specifically be configured to cause the processor 402 to perform the video landscape processing method based on adaptive tracking-box segmentation in any of the above method embodiments. For the specific implementation of each step in the program 410, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments of video landscape processing based on adaptive tracking-box segmentation, and details are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, and details are not described here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein. The structure required for constructing such systems is obvious from the above description. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the present invention described herein, and the above description of specific languages is intended to disclose the best mode of carrying out the present invention.
In the specification provided here, numerous specific details are set forth. However, it is understood that the embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of the exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single embodiment disclosed above. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the devices of the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units, or components in the embodiments may be combined into one module, unit, or component, and furthermore may be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
Claims (10)
1. A video landscape processing method based on adaptive tracking-box segmentation, the method being used for processing each group of frame images obtained by dividing a video every n frames; for one group of frame images, the method comprises:
acquiring, from the group of frame images, a t-th frame image containing a landscape object and a tracking box corresponding to the (t-1)-th frame image, where t is greater than 1; the tracking box corresponding to the 1st frame image is determined according to a segmentation result corresponding to the 1st frame image;
adjusting the tracking box corresponding to the (t-1)-th frame image according to the t-th frame image, to obtain a tracking box corresponding to the t-th frame image; performing scene segmentation processing on a partial region of the t-th frame image according to the tracking box corresponding to the t-th frame image, to obtain a segmentation result corresponding to the t-th frame image;
determining a second foreground image of the t-th frame image according to the segmentation result corresponding to the t-th frame image;
drawing a landscape content effect map corresponding to the subject content of the second foreground image, and performing fusion processing on the landscape content effect map and the second foreground image, to obtain a processed t-th frame image;
covering the t-th frame image with the processed t-th frame image, to obtain processed video data;
displaying the processed video data.
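As a non-authoritative illustration of the per-frame flow recited in claim 1, the following Python/NumPy sketch segments only the sub-region of the frame inside the tracking box and blends an effect map into the segmented foreground. The function names (`process_frame`, `segment_fn`) and the fixed 50/50 blend are assumptions for illustration; the claim does not prescribe a particular segmentation model or fusion weight.

```python
import numpy as np

def process_frame(frame, prev_box, segment_fn, effect_map):
    """Sketch of claim 1's flow: segment only inside the tracking box,
    take the resulting foreground, and fuse the effect map into it."""
    x, y, w, h = prev_box                      # tracking box from frame t-1
    region = frame[y:y + h, x:x + w]
    mask = segment_fn(region)                  # 0/1 segmentation result per pixel
    fused = region.copy()
    # equal-weight blend of foreground pixels with the effect map
    fused[mask > 0] = (0.5 * region[mask > 0] +
                       0.5 * effect_map[mask > 0]).astype(frame.dtype)
    out = frame.copy()
    out[y:y + h, x:x + w] = fused              # processed region covers the original
    return out, mask
```

A real implementation would first adapt the box to frame t (claims 6-7) before segmenting; the sketch takes the previous box as given.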
2. The method according to claim 1, wherein the drawing a landscape content effect map corresponding to the subject content of the second foreground image further comprises:
performing identification processing on the second foreground image to determine a subject category of the second foreground image;
extracting key information of the landscape object from the second foreground image;
drawing, according to the subject category of the second foreground image and the key information, a landscape content effect map corresponding to the subject content of the second foreground image.
3. The method according to claim 1 or 2, wherein the drawing, according to the subject category of the second foreground image and the key information, a landscape content effect map corresponding to the subject content of the second foreground image further comprises:
searching, according to the subject category of the second foreground image, for a basic landscape content effect map corresponding to the subject content of the second foreground image;
performing scaling processing and/or rotation processing on the basic landscape content effect map according to the key information, to obtain the landscape content effect map.
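A minimal sketch of claim 3's scaling/rotation step, using dependency-free nearest-neighbour scaling in NumPy; `fit_effect_map` is a hypothetical name, and a production system would more likely use a library resampler (e.g. an OpenCV resize) with proper interpolation.

```python
import numpy as np

def fit_effect_map(base_map, target_w, target_h, angle_deg=0.0):
    """Scale (and optionally rotate) a basic effect map to the size and
    orientation implied by the key information of the landscape object."""
    src_h, src_w = base_map.shape[:2]
    # nearest-neighbour scaling: map each target index back to a source index
    rows = np.arange(target_h) * src_h // target_h
    cols = np.arange(target_w) * src_w // target_w
    scaled = base_map[rows][:, cols]
    if angle_deg % 360 == 90:
        scaled = np.rot90(scaled, k=-1)  # simple clockwise 90-degree case only
    return scaled
```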
4. The method according to any one of claims 1-3, wherein the performing fusion processing on the landscape content effect map and the second foreground image, to obtain a processed t-th frame image further comprises:
determining fusion position information corresponding to the landscape content effect map according to the key information;
performing fusion processing on the landscape content effect map and the second foreground image according to the fusion position information, to obtain the processed t-th frame image.
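Claim 4's position-driven fusion can be sketched as a simple alpha blend at the determined fusion position; the blend weight `alpha` and the function name `fuse_at` are illustrative assumptions, since the claim leaves the fusion operator unspecified.

```python
import numpy as np

def fuse_at(frame, effect_map, pos, alpha=0.5):
    """Alpha-blend an (already scaled/rotated) effect map into the frame
    at the fusion position determined from the key information."""
    x, y = pos
    h, w = effect_map.shape[:2]
    out = frame.astype(np.float32).copy()
    roi = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1 - alpha) * roi + alpha * effect_map
    return out.astype(frame.dtype)
```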
5. The method according to any one of claims 1-4, wherein after obtaining the processed t-th frame image, the method further comprises:
performing tone processing, lighting processing, and/or brightness processing on the processed t-th frame image.
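The post-processing of claim 5 can be illustrated with the common linear gain/bias brightness adjustment (out = gain * in + bias, clipped to the 8-bit range); this particular formula is an illustrative assumption, as the claim does not fix the tone/lighting/brightness operators.

```python
import numpy as np

def adjust_brightness(image, gain=1.0, bias=0):
    """Per-pixel brightness/tone adjustment on the processed frame:
    out = clip(gain * in + bias, 0, 255)."""
    out = image.astype(np.float32) * gain + bias
    return np.clip(out, 0, 255).astype(np.uint8)
```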
6. The method according to any one of claims 1-5, wherein the adjusting the tracking box corresponding to the (t-1)-th frame image according to the t-th frame image further comprises:
performing identification processing on the t-th frame image to determine a first foreground image for the landscape object in the t-th frame image;
applying the tracking box corresponding to the (t-1)-th frame image to the t-th frame image;
adjusting the tracking box corresponding to the (t-1)-th frame image according to the first foreground image in the t-th frame image.
7. The method according to any one of claims 1-6, wherein the adjusting the tracking box corresponding to the (t-1)-th frame image according to the first foreground image in the t-th frame image further comprises:
calculating the proportion, among all pixels within the tracking box corresponding to the (t-1)-th frame image, of the pixels belonging to the first foreground image in the t-th frame image, and determining the proportion as a first foreground pixel proportion of the t-th frame image;
obtaining a second foreground pixel proportion of the (t-1)-th frame image, wherein the second foreground pixel proportion of the (t-1)-th frame image is the proportion, among all pixels within the tracking box corresponding to the (t-1)-th frame image, of the pixels belonging to the first foreground image in the (t-1)-th frame image;
calculating a difference value between the first foreground pixel proportion of the t-th frame image and the second foreground pixel proportion of the (t-1)-th frame image;
determining whether the difference value exceeds a preset difference threshold; and if so, adjusting the size of the tracking box corresponding to the (t-1)-th frame image according to the difference value.
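A sketch of claim 7's ratio comparison and size adjustment. The foreground proportions are direct: the mean of a binary foreground mask inside the box. The adjustment policy shown (grow the box when the proportion rose past the threshold, shrink it when it fell, by a hypothetical fixed `step`) is an assumption; the claim only requires that the size be adjusted according to the difference value.

```python
import numpy as np

def foreground_ratio(mask, box):
    """Proportion of pixels inside the tracking box that belong to the
    first foreground image (the first/second foreground pixel proportions)."""
    x, y, w, h = box
    window = mask[y:y + h, x:x + w]
    return float(window.mean())

def adjust_box(box, ratio_t, ratio_prev, threshold=0.1, step=2):
    """Adjust the box size when the foreground proportion changed by more
    than the preset difference threshold between frames t-1 and t."""
    x, y, w, h = box
    diff = ratio_t - ratio_prev
    if abs(diff) <= threshold:
        return box                      # within threshold: keep the box as-is
    delta = step if diff > 0 else -step # grow if the object fills more of the box
    return (x - delta // 2, y - delta // 2, w + delta, h + delta)
```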
8. A video landscape processing apparatus based on adaptive tracking-box segmentation, the apparatus being used for processing each group of frame images obtained by dividing a video every n frames; the apparatus comprises:
an acquisition module, adapted to acquire, from a group of frame images, a t-th frame image containing a landscape object and a tracking box corresponding to the (t-1)-th frame image, where t is greater than 1; the tracking box corresponding to the 1st frame image is determined according to a segmentation result corresponding to the 1st frame image;
a segmentation module, adapted to adjust the tracking box corresponding to the (t-1)-th frame image according to the t-th frame image, to obtain a tracking box corresponding to the t-th frame image; and perform scene segmentation processing on a partial region of the t-th frame image according to the tracking box corresponding to the t-th frame image, to obtain a segmentation result corresponding to the t-th frame image;
a determining module, adapted to determine a second foreground image of the t-th frame image according to the segmentation result corresponding to the t-th frame image;
a processing module, adapted to draw a landscape content effect map corresponding to the subject content of the second foreground image, and perform fusion processing on the landscape content effect map and the second foreground image, to obtain a processed t-th frame image;
an overlay module, adapted to cover the t-th frame image with the processed t-th frame image, to obtain processed video data;
a display module, adapted to display the processed video data.
9. A computing device, comprising: a processor, a memory, a communications interface, and a communication bus, wherein the processor, the memory, and the communications interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the video landscape processing method based on adaptive tracking-box segmentation according to any one of claims 1-7.
10. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the video landscape processing method based on adaptive tracking-box segmentation according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711420316.6A CN108010032A (en) | 2017-12-25 | 2017-12-25 | Video landscape processing method and processing device based on the segmentation of adaptive tracing frame |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108010032A true CN108010032A (en) | 2018-05-08 |
Family
ID=62061133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711420316.6A Pending CN108010032A (en) | 2017-12-25 | 2017-12-25 | Video landscape processing method and processing device based on the segmentation of adaptive tracing frame |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108010032A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1738426A (en) * | 2005-09-09 | 2006-02-22 | 南京大学 | Video motion goal division and track method |
CN1897015A (en) * | 2006-05-18 | 2007-01-17 | 王海燕 | Method and system for inspecting and tracting vehicle based on machine vision |
CN101290681A (en) * | 2008-05-26 | 2008-10-22 | 华为技术有限公司 | Video frequency object tracking method, device and automatic video frequency following system |
CN101394546A (en) * | 2007-09-17 | 2009-03-25 | 华为技术有限公司 | Video target profile tracing method and device |
CN102521879A (en) * | 2012-01-06 | 2012-06-27 | 肖华 | 2D (two-dimensional) to 3D (three-dimensional) method |
CN103325124A (en) * | 2012-03-21 | 2013-09-25 | 东北大学 | Target detecting and tracking system and method using background differencing method based on FPGA |
CN103607554A (en) * | 2013-10-21 | 2014-02-26 | 无锡易视腾科技有限公司 | Fully-automatic face seamless synthesis-based video synthesis method |
CN104657974A (en) * | 2013-11-25 | 2015-05-27 | 腾讯科技(上海)有限公司 | Image processing method and device |
CN105654508A (en) * | 2015-12-24 | 2016-06-08 | 武汉大学 | Monitoring video moving target tracking method based on self-adaptive background segmentation and system thereof |
CN105930833A (en) * | 2016-05-19 | 2016-09-07 | 重庆邮电大学 | Vehicle tracking and segmenting method based on video monitoring |
CN106462975A (en) * | 2014-05-28 | 2017-02-22 | 汤姆逊许可公司 | Method and apparatus for object tracking and segmentation via background tracking |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107820027A | Video character dress-up method and apparatus, computing device and computer storage medium | |
CN108109161A (en) | Video data real-time processing method and device based on adaptive threshold fuzziness | |
CN107507155A (en) | Video segmentation result edge optimization real-time processing method, device and computing device | |
CN108111911A (en) | Video data real-time processing method and device based on the segmentation of adaptive tracing frame | |
CN107862277A (en) | Live dress ornament, which is dressed up, recommends method, apparatus, computing device and storage medium | |
CN107483892A (en) | Video data real-time processing method and device, computing device | |
CN107945188A (en) | Personage based on scene cut dresss up method and device, computing device | |
CN107977927A (en) | Stature method of adjustment and device, computing device based on view data | |
CN107665482A (en) | Realize the video data real-time processing method and device, computing device of double exposure | |
CN107644423B (en) | Scene segmentation-based video data real-time processing method and device and computing equipment | |
CN107613161A (en) | Video data handling procedure and device, computing device based on virtual world | |
CN107613360A (en) | Video data real-time processing method and device, computing device | |
CN107563357A (en) | Live dress ornament based on scene cut, which is dressed up, recommends method, apparatus and computing device | |
CN107610149A (en) | Image segmentation result edge optimization processing method, device and computing device | |
CN107766803B (en) | Video character decorating method and device based on scene segmentation and computing equipment | |
CN108171716A | Video character dress-up method and apparatus based on adaptive tracking-box segmentation | |
CN107680105B (en) | Video data real-time processing method and device based on virtual world and computing equipment | |
CN107743263B (en) | Video data real-time processing method and device and computing equipment | |
CN107808372A (en) | Image penetration management method, apparatus, computing device and computer-readable storage medium | |
CN107767391A (en) | Landscape image processing method, device, computing device and computer-readable storage medium | |
CN107770606A (en) | Video data distortion processing method, device, computing device and storage medium | |
CN107563962A (en) | Video data real-time processing method and device, computing device | |
CN107633547A (en) | Realize the view data real-time processing method and device, computing device of scene rendering | |
CN107578369A (en) | Video data handling procedure and device, computing device | |
CN107945201B (en) | Video landscape processing method and device based on self-adaptive threshold segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180508