CN107146197A - Thumbnail generation method and apparatus - Google Patents

Thumbnail generation method and apparatus

Info

Publication number
CN107146197A
CN107146197A (application CN201710206909.6A / CN201710206909A)
Authority
CN
China
Prior art keywords
pixel
target
source images
saliency value
salient region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710206909.6A
Other languages
Chinese (zh)
Inventor
王西颖
王琳
聂伟
肖伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201710206909.6A priority Critical patent/CN107146197A/en
Publication of CN107146197A publication Critical patent/CN107146197A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention discloses a thumbnail generation method and apparatus. The method includes: obtaining a source image for which a thumbnail is to be generated; calculating a target saliency value for each pixel in the source image, where the target saliency value characterizes the importance of the pixel within the image; obtaining a target salient region of the source image according to the target saliency values of the pixels; and generating the thumbnail of the source image according to the target salient region. A thumbnail generated with the scheme provided by the embodiment of the invention highlights the important content of the source image.

Description

Thumbnail generation method and apparatus
Technical field
The present invention relates to the field of image processing, and in particular to a thumbnail generation method and apparatus.
Background
In recent years, with the rapid development of multimedia technologies such as image and video, 360-degree panoramic images have gradually become popular and are now used on major multimedia platforms. For example, browsing 360-degree panoramic images is one of the main functions of VR (virtual reality) head-mounted devices.
When managing the image resources stored on a multimedia device, it is very useful to display each image resource in the form of a thumbnail. Through thumbnails, a user can quickly grasp the main content of a resource and quickly locate the resource that is needed. For example, in order to speed up the browsing and searching of 360-degree panoramic images, each 360-degree panoramic image can be converted into a small-size thumbnail.
At present, thumbnails are mainly generated by directly scaling the source image down to the thumbnail size. A thumbnail generated in this way cannot highlight the important content of the source image, which makes it difficult for the user to quickly understand the main content of the source image. A thumbnail generation method is therefore needed whose thumbnails highlight the important content of the source image.
Summary of the invention
Embodiments of the invention disclose a thumbnail generation method and apparatus, so that the generated thumbnail highlights the important content of the source image. The technical solution is as follows:
In a first aspect, an embodiment of the invention provides a thumbnail generation method, the method comprising:
obtaining a source image for which a thumbnail is to be generated;
calculating a target saliency value for each pixel in the source image, where the target saliency value characterizes the importance of the pixel within the image;
obtaining a target salient region of the source image according to the target saliency values of the pixels;
generating the thumbnail of the source image according to the target salient region.
Optionally, the step of calculating the target saliency value of each pixel in the source image includes:
detecting a face region in the source image using a preset face detection algorithm, and calculating a saliency value for each pixel located in the face region, as a first saliency value;
calculating a saliency value for each pixel in the source image using a preset saliency detection algorithm, as a second saliency value;
for each pixel in the face region, performing a weighted calculation of the pixel's first saliency value and second saliency value, and taking the result as the target saliency value of the pixel;
for each pixel in the source image outside the face region, taking the second saliency value of the pixel as its target saliency value.
Optionally, obtaining the target salient region of the source image according to the target saliency values of the pixels includes:
determining salient regions of the source image according to the target saliency values of the pixels;
for each salient region, calculating the mean of the target saliency values of the pixels in the salient region to obtain the average saliency value of the region;
obtaining the target salient region of the source image according to the average saliency values obtained.
Optionally, obtaining the target salient region of the source image according to the average saliency values obtained includes:
taking the salient region with the highest average saliency value as the target salient region of the source image;
or taking the salient regions whose average saliency value exceeds a preset threshold as the target salient regions of the source image.
Optionally, in the case where at least two thumbnails are generated, the method further includes:
displaying the thumbnails one after another in a dynamic display mode, according to the thumbnail generation time; or
displaying the thumbnails in a static display mode, according to the thumbnail generation time.
Optionally, generating the thumbnail of the source image according to the target salient region includes:
projecting the target salient region onto a preset coordinate plane to generate the thumbnail of the source image.
Optionally, projecting the target salient region onto the preset coordinate plane to generate the thumbnail of the source image includes:
applying a distortion transform to the target salient region to obtain an initial image, and projecting the initial image onto the preset coordinate plane to generate the thumbnail of the source image; or
applying a distortion transform to the source image to obtain a first image, and projecting the pixels of the first image corresponding to first coordinate positions onto the preset coordinate plane to generate the thumbnail of the source image, where the first coordinate positions are the coordinate positions of the pixels in the target salient region.
In a second aspect, an embodiment of the invention provides a thumbnail generation apparatus, the apparatus comprising:
a first obtaining module, configured to obtain a source image for which a thumbnail is to be generated;
a calculation module, configured to calculate a target saliency value for each pixel in the source image, where the target saliency value characterizes the importance of the pixel within the image;
a second obtaining module, configured to obtain a target salient region of the source image according to the target saliency values of the pixels;
a generation module, configured to generate the thumbnail of the source image according to the target salient region.
Optionally, the calculation module is specifically configured to:
detect a face region in the source image using a preset face detection algorithm, and calculate a saliency value for each pixel located in the face region, as a first saliency value;
calculate a saliency value for each pixel in the source image using a preset saliency detection algorithm, as a second saliency value;
for each pixel in the face region, perform a weighted calculation of the pixel's first saliency value and second saliency value, and take the result as the target saliency value of the pixel;
for each pixel in the source image outside the face region, take the second saliency value of the pixel as its target saliency value.
Optionally, the second obtaining module includes:
a determination submodule, configured to determine salient regions of the source image according to the target saliency values of the pixels;
a first obtaining submodule, configured to, for each salient region, calculate the mean of the target saliency values of the pixels in the salient region to obtain the average saliency value of the region;
a second obtaining submodule, configured to obtain the target salient region of the source image according to the average saliency values obtained.
Optionally, the second obtaining submodule is specifically configured to:
take the salient region with the highest average saliency value as the target salient region of the source image;
or take the salient regions whose average saliency value exceeds a preset threshold as the target salient regions of the source image.
Optionally, in the case where at least two thumbnails are generated, the apparatus further includes:
a first display module, configured to display the thumbnails one after another in a dynamic display mode, according to the thumbnail generation time; or
a second display module, configured to display the thumbnails in a static display mode, according to the thumbnail generation time.
Optionally, the generation module includes:
a generation submodule, configured to project the target salient region onto a preset coordinate plane to generate the thumbnail of the source image.
Optionally, the generation submodule is specifically configured to:
apply a distortion transform to the target salient region to obtain an initial image, and project the initial image onto the preset coordinate plane to generate the thumbnail of the source image; or
apply a distortion transform to the source image to obtain a first image, and project the pixels of the first image corresponding to first coordinate positions onto the preset coordinate plane to generate the thumbnail of the source image, where the first coordinate positions are the coordinate positions of the pixels in the target salient region.
As can be seen from the above, with the thumbnail generation method and apparatus provided by the embodiments of the invention, the source image for which a thumbnail is to be generated is obtained first; then a target saliency value is calculated for each pixel in the source image, where the target saliency value characterizes the importance of the pixel within the image; a target salient region of the source image is then obtained according to the target saliency values of the pixels; finally, the thumbnail of the source image is generated according to the target salient region.
It can be seen that, with the technical solution provided by the embodiments of the invention, the thumbnail of the source image is generated from the target salient region. Because the target salient region is obtained from the target saliency values of the pixels in the source image, and the target saliency value characterizes the importance of a pixel within the image, the target salient region contains the important content of the source image, and the thumbnail generated from the target salient region highlights the important content of the source image.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the invention or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a thumbnail generation method provided by an embodiment of the invention;
Fig. 2 is a schematic structural diagram of a thumbnail generation apparatus provided by an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
The embodiments of the invention disclose a thumbnail generation method and apparatus, which are described in detail below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a thumbnail generation method provided by an embodiment of the invention, comprising the following steps:
S101, obtain a source image for which a thumbnail is to be generated.
It should be noted that the source image may be a 360-degree panoramic image, also called a three-dimensional panorama or surround-view panorama. A 360-degree panoramic image is a real-scene, omnidirectional image that gives a three-dimensional impression. Its thumbnail differs from the thumbnail of an ordinary planar image: when generating the thumbnail of a 360-degree panoramic image, an optimal viewing angle must be found, that is, the viewing angle that best attracts the user's attention or best represents the image as the user rotates and browses it. When browsing from that viewing angle, the user obtains the main content of the 360-degree panoramic image. The thumbnail may therefore also be called a viewing-angle image.
The embodiment of the invention is explained using the generation of a thumbnail of a 360-degree panoramic image as an example; this is only one specific instance of the invention and does not limit it. The embodiment can be used not only to generate thumbnails of 360-degree panoramic images, but also to generate thumbnails of ordinary planar images, fisheye images, and so on.
S102, calculate a target saliency value for each pixel in the source image.
Here, the target saliency value characterizes the importance of a pixel within the image. The saliency value of a pixel is generally defined by its contrast with the background in terms of color, brightness and orientation. In practice, the contrast between the pixel's color and the background color can be chosen as the pixel's saliency value: the greater the difference between the pixel's color and the background color, the greater the contrast, and the greater the saliency value of the pixel.
Specifically, the step of calculating the target saliency value of each pixel in the source image may include the following steps:
First step: detect the face region in the source image using a preset face detection algorithm, and calculate a saliency value for each pixel located in the face region, as the first saliency value.
The face detection algorithm can detect the face regions in the whole source image. Usually, faces in an image attract the attention of the user browsing the image, so faces can be regarded as important content of the image, and the saliency values of the pixels located in the face region can be set high. For example, the saliency value of each pixel located in the face region can be set to 255.
In practice, the face detection algorithm can be chosen according to the designer's needs; the embodiment of the invention does not restrict the specific face detection algorithm. Face detection algorithms belong to the prior art and are not described further here. For example, the designer may preset a face detection algorithm based on geometric features, so that when the method provided by the embodiment of the invention is executed, the geometric-feature-based face detection algorithm is used to detect the face region in the source image and to calculate the saliency value of each pixel located in the face region.
In one specific implementation, the saliency values of the pixels located in the face region may be calculated as follows: the saliency value of the pixel at the geometric center of the face region is set to 255, and a Gaussian model based on the source image is used to obtain the saliency values of the other face-region pixels spreading out from the center point, where the saliency values of the other pixels decrease as their distance from the center point increases. The exact relationship between saliency value and distance can be set according to user needs and is not restricted by the embodiment of the invention. Building a Gaussian model of the image background is prior art and is not described further here.
It can be understood that when there are several separate face regions in the source image, several center points are obtained. The saliency values of the other face-region pixels can then be calculated once per center point, and the sum of the saliency values corresponding to the center points is taken as the final saliency value of each of those pixels.
For example, if the center points are A, B and C, the saliency value of a pixel a calculated from center point A is a1, the value calculated from B is a2, and the value calculated from C is a3, then the saliency value of pixel a is a1 + a2 + a3.
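The sketch below illustrates, in Python, how such a first saliency map could be built under the stated assumptions: a Gaussian fall-off from each face-region center, peaking at 255, with the contributions of several centers summed as in the A/B/C example above. The spread parameter `sigma` and the function name are illustrative choices, not values given by the patent.

```python
import numpy as np

def face_region_saliency(shape, face_centers, sigma=40.0, peak=255.0):
    """Minimal sketch of the first saliency map (assumed Gaussian fall-off).

    shape is (height, width) of the source image; face_centers is a list of
    (x, y) geometric centers of detected face regions. Each center contributes
    a Gaussian that peaks at `peak` and decays with distance, and the
    contributions are summed, as in the example with centers A, B and C.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    saliency = np.zeros((h, w), dtype=np.float32)
    for cx, cy in face_centers:
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        saliency += peak * np.exp(-d2 / (2.0 * sigma ** 2))
    return saliency

# Example: two face centers in a 200 x 300 source image.
first_saliency = face_region_saliency((200, 300), [(80, 60), (220, 120)])
```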
Second step: calculate a saliency value for each pixel in the source image using a preset saliency detection algorithm, as the second saliency value.
The saliency detection algorithm calculates a saliency value for each pixel in the source image; specific saliency detection algorithms belong to the prior art and are not described further here. The designer can choose a saliency detection algorithm as required; the embodiment of the invention does not restrict the specific saliency detection algorithm. For example, the designer may preset a saliency detection algorithm based on global color contrast, so that when the method provided by the embodiment of the invention is executed, the saliency value of each pixel in the source image is calculated according to the global-color-contrast-based saliency detection algorithm.
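As an illustration of the class of algorithm named above, the following sketch computes a crude global-color-contrast saliency map by measuring each pixel's color distance to the image's mean color. This approximation is an assumption made for the example; the patent only names global color contrast as one possible basis for the preset saliency detection algorithm.

```python
import numpy as np

def global_contrast_saliency(image):
    """Crude global-color-contrast saliency: distance of each pixel's color
    to the mean color of the whole image (an illustrative simplification)."""
    mean_color = image.reshape(-1, 3).mean(axis=0)
    diff = image.astype(np.float32) - mean_color
    saliency = np.sqrt((diff ** 2).sum(axis=2))
    # Normalize to [0, 255] so it is comparable with the face-region map.
    return 255.0 * saliency / (saliency.max() + 1e-6)
```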
It should be noted that the embodiment of the invention does not restrict the order in which the first and second steps are executed: the first step can be executed before the second, the second before the first, or the two can be executed in parallel.
Third step: for each pixel in the face region, perform a weighted calculation of the pixel's first saliency value and second saliency value, and take the result as the target saliency value of the pixel.
In practice, the designer can set the respective weight factors of the first saliency value and the second saliency value from experience; the embodiment of the invention does not restrict the specific values of the weight factors.
For example, if the first and second saliency values of a pixel A located in the face region are 50 and 40 respectively, the weight factor of the first saliency value is 0.6 and the weight factor of the second saliency value is 0.4, then the weighted combination of pixel A's first and second saliency values gives a target saliency value of 50*0.6 + 40*0.4 = 46.
Fourth step: for each pixel in the source image outside the face region, take the second saliency value of the pixel as the target saliency value of the pixel.
It can be understood that the target saliency value of a pixel outside the face region contains only the second saliency value. For example, if the second saliency value of a pixel B outside the face region in the source image is 66, then the target saliency value of pixel B is 66.
It should be noted that the embodiment of the invention does not restrict the order in which the third and fourth steps are executed either: the third step can be executed before the fourth, the fourth before the third, or the two can be executed simultaneously.
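Putting the third and fourth steps together, a minimal sketch, assuming the two saliency maps and a boolean face-region mask have already been computed, is:

```python
import numpy as np

def fuse_saliency(first_saliency, second_saliency, face_mask,
                  w_first=0.6, w_second=0.4):
    """Target saliency: weighted sum inside the face region (as in the worked
    example 50*0.6 + 40*0.4 = 46), second saliency value elsewhere. The
    weights 0.6 / 0.4 come from the example and are not fixed by the patent."""
    weighted = w_first * first_saliency + w_second * second_saliency
    return np.where(face_mask, weighted, second_saliency)
```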
S103, obtain the target salient region of the source image according to the target saliency values of the pixels.
A salient region is the region of an image that first catches the eye; it is also the region that best embodies the image content. Salient regions usually occupy only some local areas of the image, and most of the rest is non-salient, with an obvious boundary between the two. Using the target saliency values of the pixels, salient regions and non-salient regions can be distinguished.
It should be noted that obtaining the target salient region of the source image according to the target saliency values of the pixels may be: determining salient regions of the source image according to the target saliency values of the pixels; for each salient region, calculating the mean of the target saliency values of the pixels in the salient region to obtain the average saliency value of the region; and obtaining the target salient region of the source image according to the average saliency values obtained.
In practice, determining the salient regions of the source image according to the target saliency values of the pixels may be: selecting one or more central pixels from the source image, and using a largest-connected-component method to group the pixels that belong to the same saliency-value range as a central pixel into one salient region, where a pixel whose target saliency value differs from the target saliency value of the central pixel by no more than a preset difference is regarded as belonging to the same saliency-value range as the central pixel. The specific largest-connected-component method belongs to the prior art and is not described further here.
For example, if the preset difference is 3, the target saliency value of a central pixel A is 30, the target saliency value of a pixel B is 35, and the target saliency value of a pixel C is 32, then the difference between the target saliency values of pixel B and central pixel A is 5 and the difference between pixel C and central pixel A is 2. The difference for pixel B exceeds the preset difference, so pixel B and central pixel A do not belong to the same saliency-value range; the difference for pixel C is within the preset difference, so pixel C and central pixel A belong to the same saliency-value range.
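A minimal sketch of this grouping step is given below, assuming a 4-connected flood fill as the concrete form of the connected-component growth; the patent itself only states that a largest-connected-component method with a preset saliency difference is used.

```python
import numpy as np
from collections import deque

def grow_salient_region(target_saliency, seed, max_diff=3):
    """Grow a salient region around a central pixel `seed` (y, x): a neighbour
    joins the region when its target saliency value differs from the seed's
    by at most `max_diff`, as in the example with pixels A (30), B (35) and
    C (32) and a preset difference of 3."""
    h, w = target_saliency.shape
    seed_value = float(target_saliency[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(float(target_saliency[ny, nx]) - seed_value) <= max_diff):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```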
Specifically, obtaining the target salient region of the source image according to the average saliency values obtained may be: taking the salient region with the highest average saliency value as the target salient region of the source image; or taking the salient regions whose average saliency value exceeds a preset threshold as the target salient regions of the source image.
For example, suppose there are three salient regions A, B and C, whose average saliency values are 55, 40 and 30 respectively. Salient region A, which has the highest average saliency value, can be taken as the target salient region of the source image; or, with a preset threshold of 35, salient regions A and B, whose average saliency values exceed the threshold, can be taken as the target salient regions of the source image.
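Either selection rule can be sketched as follows, assuming the fused target saliency map and one boolean mask per salient region are given:

```python
import numpy as np

def select_target_regions(target_saliency, region_masks, threshold=None):
    """Pick the target salient region(s) from per-region average saliency.

    With threshold=None the region with the highest average is returned
    (regions with averages 55 / 40 / 30 -> the first one); with a threshold
    such as 35, every region whose average exceeds it is returned."""
    averages = [float(target_saliency[mask].mean()) for mask in region_masks]
    if threshold is None:
        return [region_masks[int(np.argmax(averages))]]
    return [mask for mask, avg in zip(region_masks, averages) if avg > threshold]
```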
S104, generate the thumbnail of the source image according to the target salient region.
Specifically, generating the thumbnail of the source image according to the target salient region may be: projecting the target salient region onto a preset coordinate plane to generate the thumbnail of the source image.
It can be understood that coordinate systems include spherical coordinates, cylindrical coordinates, Cartesian coordinates and so on, and that the coordinates of images as perceived by the human eye correspond to spherical or cylindrical coordinates. When the source image is a fisheye image captured with a fisheye lens, its distortion is severe and its visual quality poor, so the fisheye image can be projected onto cylindrical or spherical coordinates by a cylindrical or spherical projection method to remove the distortion.
The specific cylindrical and spherical projection methods belong to the prior art and are not described further here. The designer can choose a projection method as needed; the embodiment of the invention does not restrict it. For example, the designer may choose the spherical projection method and project the target salient region onto spherical coordinates.
Specifically, when the embodiment of the invention is applied to a VR (Virtual Reality) device, the target salient region can be projected onto the preset coordinate plane according to the FOV (Field Of View) of the HMD (Head-Mounted Display) of the VR device and the lens parameters, to generate the thumbnail of the source image.
The FOV controls the width and height of each generated thumbnail: the larger the FOV, the larger the width and height of the thumbnail. In practice, a mapping table between FOV and thumbnail width and height can be established in advance, and the width and height corresponding to each FOV obtained from this table, so that the size of the generated thumbnail matches the FOV of the HMD, further improving the user's visual experience.
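As an illustration of this projection step, the sketch below extracts a rectilinear viewport of a given FOV from an equirectangular 360-degree panorama, with the virtual camera pointed at the target salient region. The equirectangular input format, the yaw/pitch parametrisation and the nearest-neighbour sampling are assumptions made for the example; the patent only states that the target salient region is projected onto a preset coordinate plane whose size is controlled by the HMD's FOV.

```python
import numpy as np

def panorama_viewport(pano, yaw, pitch, fov_deg, out_w, out_h):
    """Project part of an equirectangular panorama onto a flat viewing plane.

    pano is an H x W x 3 array; yaw/pitch (degrees) aim the virtual camera at
    the centre of the target salient region; fov_deg controls how wide the
    generated thumbnail is, and out_w/out_h its pixel size."""
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    # Rays through the viewing plane, rotated by pitch (x axis) then yaw (y axis).
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (ry @ rx).T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # latitude in [-pi/2, pi/2]
    h, w = pano.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return pano[v, u]
```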
Because the camera of a VR device uses a fisheye lens, images captured with a VR device are also fisheye images. The lens parameters are mainly the distortion coefficients of the fisheye image; projecting with these distortion coefficients allows the fisheye image to be converted more accurately into a cylindrical or spherical image, further improving the clarity of the thumbnail. In practice, the distortion coefficients can be calculated with Zhang's calibration method; the specific calculation of distortion coefficients with Zhang's calibration belongs to the prior art and is not described further here.
Further, in order to improve the user's visual experience, a distortion transform can also be applied to the fisheye image to remove its distortion and turn it into an image that is easy for the human eye to perceive.
For example, in one specific implementation, projecting the target salient region onto the preset coordinate plane to generate the thumbnail of the source image may be: applying a distortion transform to the target salient region to obtain an initial image, and projecting the initial image onto the preset coordinate plane to generate the thumbnail of the source image.
In another specific implementation, projecting the target salient region onto the preset coordinate plane to generate the thumbnail of the source image may instead be: applying a distortion transform to the source image to obtain a first image, and projecting the pixels of the first image corresponding to the first coordinate positions onto the preset coordinate plane to generate the thumbnail of the source image, where the first coordinate positions are the coordinate positions of the pixels in the target salient region.
When performing the distortion transform, the distortion coefficients are calculated first, and a transform opposite to the one that produced the distortion is then applied using those coefficients, so that the distortion is removed. The specific distortion transform method belongs to the prior art and is not described further here.
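One possible concrete form of this distortion transform, assuming OpenCV's fisheye camera model and intrinsics/distortion coefficients obtained from calibration (for example with Zhang's method), is sketched below; the library choice and parameter names are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def undistort_fisheye(image, K, D):
    """Remove fisheye lens distortion using calibrated parameters.

    K is the 3x3 camera intrinsic matrix and D the four fisheye distortion
    coefficients. Keeping the original intrinsics for the output view is an
    illustrative choice."""
    h, w = image.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)
```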
It can be seen that, with the technical solution provided by the embodiments of the invention, the thumbnail of the source image is generated from the target salient region. Because the target salient region is obtained from the target saliency values of the pixels in the source image, and the target saliency value characterizes the importance of a pixel within the image, the target salient region contains the important content of the source image, and the thumbnail generated from the target salient region highlights the important content of the source image.
Further, in order to enhance the display effect of the thumbnails and better satisfy the user experience, in the case where at least two thumbnails are generated, the method may further include:
displaying the thumbnails one after another in a dynamic display mode, according to the thumbnail generation time; or
displaying the thumbnails in a static display mode, according to the thumbnail generation time.
In the dynamic display mode, the thumbnails can be shown one after another as an animation, for example in GIF (Graphics Interchange Format); or the thumbnails can be cycled through one by one at a fixed interval.
In the static display mode, one, several or all of the generated thumbnails can be displayed directly; when one or several thumbnails are displayed, the other thumbnails can be switched in according to the user's thumbnail switching instructions.
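As one possible realisation of the dynamic display mode, the sketch below packs the generated thumbnails into a single animated GIF with Pillow, shown in generation order; the library and the frame duration are assumed choices.

```python
from PIL import Image

def save_dynamic_thumbnail(frame_paths, out_path="thumbnail.gif", duration_ms=500):
    """Combine thumbnails (in generation order) into one animated GIF."""
    frames = [Image.open(p).convert("RGB") for p in frame_paths]
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=duration_ms, loop=0)
```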
Corresponding to the above method embodiment, an embodiment of the invention also provides a thumbnail generation apparatus.
Referring to Fig. 2, Fig. 2 is a schematic structural diagram of a thumbnail generation apparatus provided by an embodiment of the invention, comprising:
a first obtaining module 201, configured to obtain a source image for which a thumbnail is to be generated;
a calculation module 202, configured to calculate a target saliency value for each pixel in the source image, where the target saliency value characterizes the importance of the pixel within the image;
a second obtaining module 203, configured to obtain a target salient region of the source image according to the target saliency values of the pixels;
a generation module 204, configured to generate the thumbnail of the source image according to the target salient region.
The calculation module 202 is specifically configured to:
detect the face region in the source image using a preset face detection algorithm, and calculate a saliency value for each pixel located in the face region, as a first saliency value;
calculate a saliency value for each pixel in the source image using a preset saliency detection algorithm, as a second saliency value;
for each pixel in the face region, perform a weighted calculation of the pixel's first saliency value and second saliency value, and take the result as the target saliency value of the pixel;
for each pixel in the source image outside the face region, take the second saliency value of the pixel as its target saliency value.
The second obtaining module 203 includes:
a determination submodule, configured to determine salient regions of the source image according to the target saliency values of the pixels;
a first obtaining submodule, configured to, for each salient region, calculate the mean of the target saliency values of the pixels in the salient region to obtain the average saliency value of the region;
a second obtaining submodule, configured to obtain the target salient region of the source image according to the average saliency values obtained.
The second obtaining submodule is specifically configured to:
take the salient region with the highest average saliency value as the target salient region of the source image;
or take the salient regions whose average saliency value exceeds a preset threshold as the target salient regions of the source image.
The generation module 204 includes:
a generation submodule, configured to project the target salient region onto a preset coordinate plane to generate the thumbnail of the source image.
The generation submodule is specifically configured to:
apply a distortion transform to the target salient region to obtain an initial image, and project the initial image onto the preset coordinate plane to generate the thumbnail of the source image; or
apply a distortion transform to the source image to obtain a first image, and project the pixels of the first image corresponding to first coordinate positions onto the preset coordinate plane to generate the thumbnail of the source image, where the first coordinate positions are the coordinate positions of the pixels in the target salient region.
It can be seen that, with the technical solution provided by the embodiments of the invention, the thumbnail of the source image is generated from the target salient region. Because the target salient region is obtained from the target saliency values of the pixels in the source image, and the target saliency value characterizes the importance of a pixel within the image, the target salient region contains the important content of the source image, and the thumbnail generated from the target salient region highlights the important content of the source image.
Further, in order to enhance the display effect of the thumbnails and better satisfy the user experience, in the case where at least two thumbnails are generated, the apparatus also includes:
a first display module, configured to display the thumbnails one after another in a dynamic display mode, according to the thumbnail generation time; or
a second display module, configured to display the thumbnails in a static display mode, according to the thumbnail generation time.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements, but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element preceded by the phrase "comprising a..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The embodiments in this specification are described in a related manner; identical or similar parts of the embodiments can be referred to in each other, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described relatively briefly because it is basically similar to the method embodiment; for relevant details, refer to the description of the method embodiment.
Those of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk.
The foregoing is only a preferred embodiment of the invention and is not intended to limit the protection scope of the invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is included in the protection scope of the invention.

Claims (14)

1. A thumbnail generation method, characterized in that the method comprises:
obtaining a source image for which a thumbnail is to be generated;
calculating a target saliency value for each pixel in the source image, wherein the target saliency value characterizes the importance of the pixel within the image;
obtaining a target salient region of the source image according to the target saliency values of the pixels;
generating the thumbnail of the source image according to the target salient region.
2. The method according to claim 1, characterized in that the step of calculating the target saliency value of each pixel in the source image comprises:
detecting a face region in the source image using a preset face detection algorithm, and calculating a saliency value for each pixel located in the face region, as a first saliency value;
calculating a saliency value for each pixel in the source image using a preset saliency detection algorithm, as a second saliency value;
for each pixel in the face region, performing a weighted calculation of the first saliency value and the second saliency value of the pixel, and taking the result as the target saliency value of the pixel;
for each pixel in the source image outside the face region, taking the second saliency value of the pixel as the target saliency value of the pixel.
3. The method according to claim 1, characterized in that obtaining the target salient region of the source image according to the target saliency values of the pixels comprises:
determining salient regions of the source image according to the target saliency values of the pixels;
for each salient region, calculating the mean of the target saliency values of the pixels in the salient region to obtain the average saliency value of the region;
obtaining the target salient region of the source image according to the average saliency values obtained.
4. The method according to claim 3, characterized in that obtaining the target salient region of the source image according to the average saliency values obtained comprises:
taking the salient region with the highest average saliency value as the target salient region of the source image;
or taking the salient regions whose average saliency value exceeds a preset threshold as the target salient regions of the source image.
5. The method according to claim 1, characterized in that, in the case where at least two thumbnails are generated, the method further comprises:
displaying the thumbnails one after another in a dynamic display mode, according to the thumbnail generation time; or
displaying the thumbnails in a static display mode, according to the thumbnail generation time.
6. The method according to claim 1, characterized in that generating the thumbnail of the source image according to the target salient region comprises:
projecting the target salient region onto a preset coordinate plane to generate the thumbnail of the source image.
7. The method according to claim 6, characterized in that projecting the target salient region onto the preset coordinate plane to generate the thumbnail of the source image comprises:
applying a distortion transform to the target salient region to obtain an initial image, and projecting the initial image onto the preset coordinate plane to generate the thumbnail of the source image; or
applying a distortion transform to the source image to obtain a first image, and projecting the pixels of the first image corresponding to first coordinate positions onto the preset coordinate plane to generate the thumbnail of the source image, wherein the first coordinate positions are the coordinate positions of the pixels in the target salient region.
8. A thumbnail generation apparatus, characterized in that the apparatus comprises:
a first obtaining module, configured to obtain a source image for which a thumbnail is to be generated;
a calculation module, configured to calculate a target saliency value for each pixel in the source image, wherein the target saliency value characterizes the importance of the pixel within the image;
a second obtaining module, configured to obtain a target salient region of the source image according to the target saliency values of the pixels;
a generation module, configured to generate the thumbnail of the source image according to the target salient region.
9. The apparatus according to claim 8, characterized in that the calculation module is specifically configured to:
detect a face region in the source image using a preset face detection algorithm, and calculate a saliency value for each pixel located in the face region, as a first saliency value;
calculate a saliency value for each pixel in the source image using a preset saliency detection algorithm, as a second saliency value;
for each pixel in the face region, perform a weighted calculation of the first saliency value and the second saliency value of the pixel, and take the result as the target saliency value of the pixel;
for each pixel in the source image outside the face region, take the second saliency value of the pixel as the target saliency value of the pixel.
10. The apparatus according to claim 8, characterized in that the second obtaining module comprises:
a determination submodule, configured to determine salient regions of the source image according to the target saliency values of the pixels;
a first obtaining submodule, configured to, for each salient region, calculate the mean of the target saliency values of the pixels in the salient region to obtain the average saliency value of the region;
a second obtaining submodule, configured to obtain the target salient region of the source image according to the average saliency values obtained.
11. The apparatus according to claim 10, characterized in that the second obtaining submodule is specifically configured to:
take the salient region with the highest average saliency value as the target salient region of the source image;
or take the salient regions whose average saliency value exceeds a preset threshold as the target salient regions of the source image.
12. The apparatus according to claim 8, characterized in that, in the case where at least two thumbnails are generated, the apparatus further comprises:
a first display module, configured to display the thumbnails one after another in a dynamic display mode, according to the thumbnail generation time; or
a second display module, configured to display the thumbnails in a static display mode, according to the thumbnail generation time.
13. The apparatus according to claim 8, characterized in that the generation module comprises:
a generation submodule, configured to project the target salient region onto a preset coordinate plane to generate the thumbnail of the source image.
14. The apparatus according to claim 13, characterized in that the generation submodule is specifically configured to:
apply a distortion transform to the target salient region to obtain an initial image, and project the initial image onto the preset coordinate plane to generate the thumbnail of the source image; or
apply a distortion transform to the source image to obtain a first image, and project the pixels of the first image corresponding to first coordinate positions onto the preset coordinate plane to generate the thumbnail of the source image, wherein the first coordinate positions are the coordinate positions of the pixels in the target salient region.
CN201710206909.6A 2017-03-31 2017-03-31 Thumbnail generation method and apparatus Pending CN107146197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710206909.6A CN107146197A (en) 2017-03-31 2017-03-31 Thumbnail generation method and apparatus

Publications (1)

Publication Number Publication Date
CN107146197A true CN107146197A (en) 2017-09-08

Family

ID=59783924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710206909.6A Pending CN107146197A (en) Thumbnail generation method and apparatus

Country Status (1)

Country Link
CN (1) CN107146197A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090076022A (en) * 2008-01-07 2009-07-13 엘지전자 주식회사 Apparatus for generating a thumb-nail and multi-codec decoder apparatus including the same
CN101620673A (en) * 2009-06-18 2010-01-06 北京航空航天大学 Robust face detecting and tracking method
CN104537616A (en) * 2014-12-20 2015-04-22 中国科学院西安光学精密机械研究所 Correction method for fisheye image distortion
CN106251283A (en) * 2016-07-28 2016-12-21 乐视控股(北京)有限公司 A kind of reduced graph generating method and equipment
CN106485653A (en) * 2016-10-19 2017-03-08 上海传英信息技术有限公司 User terminal and the generation method of panoramic pictures dynamic thumbnail

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
程德俊 (Cheng Dejun): "Research on an automotive panoramic surround-view system", China Masters' Theses Full-text Database *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958609A (en) * 2018-07-24 2018-12-07 百度在线网络技术(北京)有限公司 Generation method, device, storage medium and the terminal device of three-dimensional panorama surface plot
CN110045892A (en) * 2019-04-19 2019-07-23 维沃移动通信有限公司 Display methods and terminal device
CN110097539A (en) * 2019-04-19 2019-08-06 贝壳技术有限公司 A kind of method and device intercepting picture in virtual three-dimensional model
CN110045892B (en) * 2019-04-19 2021-04-02 维沃移动通信有限公司 Display method and terminal equipment
CN110349082A (en) * 2019-06-28 2019-10-18 腾讯科技(深圳)有限公司 Method of cutting out and device, the storage medium and electronic device of image-region
CN110349082B (en) * 2019-06-28 2023-08-29 腾讯科技(深圳)有限公司 Image area clipping method and device, storage medium and electronic device
WO2021000841A1 (en) * 2019-06-30 2021-01-07 华为技术有限公司 Method for generating user profile photo, and electronic device
CN110377204B (en) * 2019-06-30 2021-07-09 华为技术有限公司 Method for generating user head portrait and electronic equipment
CN110377204A (en) * 2019-06-30 2019-10-25 华为技术有限公司 A kind of method and electronic equipment generating user's head portrait
US11914850B2 (en) 2019-06-30 2024-02-27 Huawei Technologies Co., Ltd. User profile picture generation method and electronic device
CN110910470A (en) * 2019-11-11 2020-03-24 广联达科技股份有限公司 Method and device for generating high-quality thumbnail
CN110910470B (en) * 2019-11-11 2023-07-07 广联达科技股份有限公司 Method and device for generating high-quality thumbnail
CN112634128A (en) * 2020-12-22 2021-04-09 天津大学 Stereo image redirection method based on deep learning
CN112634128B (en) * 2020-12-22 2022-06-14 天津大学 Stereo image redirection method based on deep learning
CN115033154A (en) * 2021-02-23 2022-09-09 北京小米移动软件有限公司 Thumbnail generation method, thumbnail generation device and storage medium

Similar Documents

Publication Publication Date Title
CN107146197A (en) A kind of reduced graph generating method and device
Attal et al. MatryODShka: Real-time 6DoF video view synthesis using multi-sphere images
TWI709107B (en) Image feature extraction method and saliency prediction method including the same
Buehler et al. Non-metric image-based rendering for video stabilization
Lin et al. Surfaces with occlusions from layered stereo
CN111243071A (en) Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction
US11004267B2 (en) Information processing apparatus, information processing method, and storage medium for generating a virtual viewpoint image
Wei et al. Fisheye video correction
JP4916548B2 (en) Establish and use dominant lines of images
CN103942754B (en) Panoramic picture complementing method and device
CN108876718B (en) Image fusion method and device and computer storage medium
Li et al. A geodesic-preserving method for image warping
CN106875431A (en) Picture charge pattern method and augmented reality implementation method with moving projection
CN107358609B (en) Image superposition method and device for augmented reality
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
US20220198737A1 (en) Method and device for displaying details of a texture of a three-dimensional object
Simon Tracking-by-synthesis using point features and pyramidal blurring
CN107145224A (en) Human eye sight tracking and device based on three-dimensional sphere Taylor expansion
CN110245199A (en) A kind of fusion method of high inclination-angle video and 2D map
US20230342973A1 (en) Image processing method and apparatus, device, storage medium, and computer program product
Ha et al. Embedded panoramic mosaic system using auto-shot interface
Zhou et al. MR video fusion: interactive 3D modeling and stitching on wide-baseline videos
US9602708B2 (en) Rectified stereoscopic 3D panoramic picture
Yan et al. Seamless stitching of stereo images for generating infinite panoramas
CN115564639A (en) Background blurring method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170908