CN107613383A - Video volume adjustment method, apparatus, and electronic device - Google Patents
- Publication number
- CN107613383A CN107613383A CN201710812122.4A CN201710812122A CN107613383A CN 107613383 A CN107613383 A CN 107613383A CN 201710812122 A CN201710812122 A CN 201710812122A CN 107613383 A CN107613383 A CN 107613383A
- Authority
- CN
- China
- Prior art keywords
- dynamic
- active user
- video
- multiframe
- dynamic object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The invention discloses a video volume adjustment method, an apparatus, and an electronic device. The method includes: obtaining the depth data of a dynamic object in each frame image of a dynamic video embedded in a 3D background; obtaining multiple frames of depth images of the current user; determining the change in the distance between the current user and the dynamic object according to the current user's depth images and the depth data of the dynamic object in each frame image of the dynamic video; and adjusting the volume of the dynamic video according to that change. A dynamic video with a suitable volume can thus be provided in a personalized manner according to the user's situation, improving the playback efficiency of the dynamic video and the user's viewing experience.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a video volume adjustment method, an apparatus, and an electronic device.
Background technology
In existing video volume adjustment techniques, the user must click a corresponding volume button or drag a volume slider to adjust the volume of a dynamic video. A dynamic video with a suitable volume cannot be provided in a personalized manner according to the user's situation, which reduces the playback efficiency of the dynamic video and degrades the viewing experience.
Summary of the invention
Embodiments of the present invention provide a video volume adjustment method, an apparatus, and an electronic device.
The video volume adjustment method of embodiments of the present invention includes:
obtaining a dynamic video embedded in a 3D background, and multiple frames of depth images corresponding to the dynamic video;
processing each frame image in the dynamic video according to the multiple frames of depth images, to obtain the depth data of a dynamic object in each frame image;
obtaining multiple frames of depth images of the current user;
determining the change in the distance between the current user and the dynamic object according to the current user's multiple frames of depth images and the depth data of the dynamic object in each frame image of the dynamic video;
adjusting the volume of the dynamic video according to the change in the distance between the current user and the dynamic object.
The video volume adjustment apparatus of embodiments of the present invention includes a depth image acquisition component and a processor. The processor is configured to obtain a dynamic video embedded in a 3D background and multiple frames of depth images corresponding to the dynamic video, and to process each frame image in the dynamic video according to those depth images to obtain the depth data of a dynamic object in each frame image. The depth image acquisition component is configured to obtain multiple frames of depth images of the current user. The processor is further configured to determine the change in the distance between the current user and the dynamic object according to the current user's depth images and the depth data of the dynamic object in each frame image of the dynamic video, and to adjust the volume of the dynamic video according to that change.
The electronic device of embodiments of the present invention includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing the video volume adjustment method described above.
The computer-readable storage medium of embodiments of the present invention includes a computer program used in combination with an electronic device capable of imaging; the computer program can be executed by a processor to carry out the video volume adjustment method described above.
The video volume adjustment method, video volume adjustment apparatus, electronic device, and computer-readable storage medium of embodiments of the present invention obtain the depth data of the dynamic object in each frame image of a dynamic video embedded in a 3D background; obtain multiple frames of depth images of the current user; determine the change in the distance between the current user and the dynamic object according to the current user's depth images and the depth data of the dynamic object in each frame image of the dynamic video; and adjust the volume of the dynamic video according to that change. A dynamic video with a suitable volume can thus be provided in a personalized manner according to the user's situation, improving the playback efficiency of the dynamic video and the user's viewing experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from it, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the video volume adjustment method of some embodiments of the present invention.
Fig. 2 is a module diagram of the video volume adjustment apparatus of some embodiments of the present invention.
Fig. 3 is a structural schematic of the electronic device of some embodiments of the present invention.
Fig. 4 is a flowchart of the video volume adjustment method of some embodiments of the present invention.
Fig. 5 is a flowchart of the video volume adjustment method of some embodiments of the present invention.
Fig. 6(a) to Fig. 6(e) are scene schematics of structured light measurement according to one embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are scene schematics of structured light measurement according to one embodiment of the present invention.
Fig. 8 is a flowchart of the video volume adjustment method of some embodiments of the present invention.
Fig. 9 is a module diagram of the electronic device of some embodiments of the present invention.
Fig. 10 is a module diagram of the electronic device of some embodiments of the present invention.
Embodiment
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting it.
Referring to Fig. 1 and Fig. 2, the video volume adjustment method of embodiments of the present invention is used in an electronic device 1000. The video volume adjustment method includes:
S101, obtaining a dynamic video embedded in a 3D background, and multiple frames of depth images corresponding to the dynamic video.
S102, processing each frame image in the dynamic video according to the multiple frames of depth images, to obtain the depth data of a dynamic object in each frame image.
S103, obtaining multiple frames of depth images of the current user.
S104, determining the change in the distance between the current user and the dynamic object according to the current user's multiple frames of depth images and the depth data of the dynamic object in each frame image of the dynamic video.
S105, adjusting the volume of the dynamic video according to the change in the distance between the current user and the dynamic object.
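Steps S101 to S104 can be sketched as follows, assuming each depth frame is a NumPy array of metric depths along the camera axis and that the dynamic object's pixels per video frame are given by a boolean mask; the helper names here are illustrative assumptions, not part of the patent.

```python
import numpy as np

def mean_depth(depth_frame, mask=None):
    """Representative depth of a frame, or of a masked object region (S102)."""
    region = depth_frame[mask] if mask is not None else depth_frame
    return float(region.mean())

def user_object_distances(user_frames, object_frames, object_masks):
    """S104: per-frame |user - object| distance along the camera axis."""
    return [abs(mean_depth(u) - mean_depth(o, m))
            for u, o, m in zip(user_frames, object_frames, object_masks)]

# Two synthetic 4x4 frames: the object stays at 2 m, the user moves 3 m -> 4 m.
obj = [np.full((4, 4), 2.0), np.full((4, 4), 2.0)]
masks = [np.ones((4, 4), bool)] * 2
user = [np.full((4, 4), 3.0), np.full((4, 4), 4.0)]
print(user_object_distances(user, obj, masks))  # [1.0, 2.0] -> distance grew
```

The growing distance series is what step S105 then maps to a volume change.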
Referring to Fig. 3, the video volume adjustment method of embodiments of the present invention can be implemented by the video volume adjustment apparatus 100 of embodiments of the present invention. The video volume adjustment apparatus 100 is used in the electronic device 1000 and includes a depth image acquisition component 12 and a processor 20. In this embodiment, steps S101, S102, S104, and S105 can be implemented by the processor 20, and step S103 can be implemented by the depth image acquisition component 12. In addition, the video volume adjustment apparatus 100 may also include a visible-light camera 11 for obtaining the dynamic video embedded in the 3D background.
That is, the processor 20 can be configured to obtain a dynamic video embedded in a 3D background and multiple frames of depth images corresponding to the dynamic video, and to process each frame image in the dynamic video according to those depth images to obtain the depth data of the dynamic object in each frame image; the depth image acquisition component 12 can be configured to obtain multiple frames of depth images of the current user; and the processor 20 can further be configured to determine the change in the distance between the current user and the dynamic object according to the current user's depth images and the depth data of the dynamic object in each frame image of the dynamic video, and to adjust the volume of the dynamic video accordingly.
In this embodiment, one or more dynamic videos can be embedded in the 3D background. The video volume adjustment apparatus can obtain the depth data of the dynamic object in each frame image of each dynamic video and, according to that depth data and the current user's multiple frames of depth images, adjust the volume of each dynamic video.
In this embodiment, step S102 may specifically include: recognizing each frame image in the dynamic video to obtain the region of the dynamic object in each frame image; and obtaining, from the depth image corresponding to each frame image, the depth data corresponding to the region of the dynamic object.
Specifically, the video volume adjustment apparatus can obtain each frame image in the dynamic video, recognize every frame image to extract its feature points, compare the feature points of every frame image with pre-stored feature points of the dynamic object to obtain the region in which the dynamic object is located in every frame image, and obtain the depth data of the dynamic object from the corresponding depth image according to that region.
In this embodiment, step S105 may specifically include: if the distance between the current user and the dynamic object increases, reducing the volume of the dynamic video; if the distance between the current user and the dynamic object decreases, increasing the volume of the dynamic video.
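The S105 rule above can be sketched as a small function; the 0.1 adjustment step and the [0, 1] volume range are illustrative assumptions rather than values from the patent.

```python
def adjusted_volume(volume, d_prev, d_curr, step=0.1):
    """S105 sketch: a larger user-object distance lowers the volume,
    a smaller one raises it; the result is clamped to [0, 1]."""
    if d_curr > d_prev:
        volume -= step
    elif d_curr < d_prev:
        volume += step
    return min(1.0, max(0.0, volume))

print(adjusted_volume(0.5, 1.0, 2.0))  # user moved away -> volume drops to 0.4
print(adjusted_volume(0.5, 2.0, 1.0))  # user approached -> volume rises to 0.6
```

A real player would apply such a rule continuously as new depth frames arrive, with the step size tuned to taste.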
The video volume adjustment apparatus 100 of embodiments of the present invention can be applied to the electronic device 1000 of embodiments of the present invention. In other words, the electronic device 1000 of embodiments of the present invention includes the video volume adjustment apparatus 100 of embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
The video volume adjustment method of embodiments of the present invention obtains the depth data of the dynamic object in each frame image of a dynamic video embedded in a 3D background; obtains multiple frames of depth images of the current user; determines the change in the distance between the current user and the dynamic object according to the current user's depth images and the depth data of the dynamic object in each frame image of the dynamic video; and then adjusts the volume of the dynamic video according to that change. A dynamic video with a suitable volume can thus be provided in a personalized manner according to the user's situation, improving the playback efficiency of the dynamic video and the user's viewing experience.
Referring to Fig. 4, in some embodiments, step S103 may specifically include the following steps:
S1031, projecting structured light onto the current user.
S1032, capturing multiple frames of structured light images modulated by the current user.
S1033, demodulating the phase information corresponding to each pixel of the multiple frames of structured light images to obtain the multiple frames of depth images.
Referring again to Fig. 3, in some embodiments, the depth image acquisition component 12 includes a structured light projector 121 and a structured-light camera 122. Step S1031 can be implemented by the structured light projector 121, and steps S1032 and S1033 can be implemented by the structured-light camera 122.
In other words, the structured light projector 121 can be used to project structured light onto the current user; the structured-light camera 122 can be used to capture the multiple frames of structured light images modulated by the current user, and to demodulate the phase information corresponding to each pixel of those images to obtain the multiple frames of depth images.
Specifically, after the structured light projector 121 projects structured light of a certain pattern onto the face or body of the current user, a structured light image modulated by the current user is formed on the surface of the current user's face or body. The structured-light camera 122 captures the modulated structured light image and demodulates it to obtain the depth image. The pattern of the structured light may be laser stripes, a Gray code, sinusoidal fringes, a non-uniform speckle pattern, or the like.
Referring to Fig. 5, in some embodiments, the process in step S1033 of demodulating the phase information corresponding to each pixel of the multiple frames of structured light images to obtain the multiple frames of depth images may specifically include the following steps:
S10331, for each frame of structured light image among the multiple frames, demodulating the phase information corresponding to each pixel in the structured light image.
S10332, converting the phase information into depth information.
S10333, generating a depth image according to the depth information.
Referring again to Fig. 2, in some embodiments, steps S10331, S10332, and S10333 can be implemented by the structured-light camera 122.
In other words, the structured-light camera 122 can further be used to demodulate, for each frame of structured light image among the multiple frames, the phase information corresponding to each pixel in that image; to convert the phase information into depth information; and to generate a depth image according to the depth information.
Specifically, compared with unmodulated structured light, the phase information of the modulated structured light is changed, so the structured light shown in the structured light image is distorted, and the change in phase characterizes the depth information of the object. The structured-light camera 122 therefore first demodulates the phase information corresponding to each pixel in the structured light image, then calculates the depth information from the phase information, thereby obtaining the final depth image.
To make the process of acquiring a depth image of the current user's face or body with structured light clearer to those skilled in the art, its concrete principle is illustrated below using a widely used grating projection (fringe projection) technique as an example. Grating projection belongs to area structured light in the broad sense.
As shown in Fig. 6(a), when area structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured light projector 121; the structured-light camera 122 then captures the degree to which the fringes are bent after modulation by the object, and the curved fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition component 12 must be calibrated before depth information is collected with structured light; calibration includes calibration of geometric parameters (for example, the relative position of the structured-light camera 122 and the structured light projector 121), of the internal parameters of the structured-light camera 122, of the internal parameters of the structured light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Because the phase must later be obtained from the distorted fringes, for example with the four-step phase-shifting method, four fringe patterns with phase differences of π/2 are generated here; the structured light projector 121 then projects these four fringe patterns onto the measured object (the mask shown in Fig. 6(a)) in a time-multiplexed manner, and the structured-light camera 122 collects images such as the one on the left of Fig. 6(b), while the fringes on the reference plane, shown on the right of Fig. 6(b), are also read.
In the second step, phase recovery is carried out. The structured-light camera 122 calculates the modulated phase map from the four collected modulated fringe images (i.e., the structured light images); the result at this point is a wrapped phase map. Because the result of the four-step phase-shifting algorithm is computed with an arctangent function, the phase of the modulated structured light is limited to [-π, π]: whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase values are shown in Fig. 6(c).
During phase recovery, de-jump (unwrapping) processing is required to restore the wrapped phase to a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map, and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e., the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase; this phase difference characterizes the depth information of the measured object relative to the reference plane. Substituting the phase difference into the phase-to-depth conversion formula (whose parameters are obtained by calibration) yields the three-dimensional model of the object under test shown in Fig. 6(e).
It should be appreciated that, in practical applications, depending on the specific application scenario, the structured light employed in embodiments of the present invention may be any other pattern besides the above grating.
As one possible implementation, the present invention can also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffractive element that is essentially a flat plate and has an embossed diffractive structure with a particular phase distribution, its cross section consisting of two or more concave-convex stepped embossed structures. The substrate of the diffractive element is roughly 1 micron thick, the height of each step is non-uniform, and the height may range from 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is the local diffraction structure of the collimating beam-splitting element of this embodiment; Fig. 7(b) is a cross-sectional side view along section A-A, with both abscissa and ordinate in microns. The speckle pattern generated by the speckle structured light is highly random, and the pattern changes with distance. Therefore, before speckle structured light can be used to obtain depth information, the speckle patterns in space must first be calibrated: for example, within a range of 0 to 4 meters from the structured-light camera 122, a reference plane is taken every 1 centimeter, so that 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object change the speckle pattern projected onto it. After the structured-light camera 122 captures the speckle pattern (i.e., the structured light image) projected on the measured object, that pattern is cross-correlated one by one with the 400 speckle images saved during calibration, giving 400 correlation images. The position of the measured object in space shows a peak in the correlation images; superposing these peaks and performing an interpolation operation yields the depth information of the measured object.
An ordinary diffractive element yields many diffracted beams after diffracting a light beam, but the intensities of those beams differ greatly, so the risk of injury to the human eye is large; and even if the diffracted light is diffracted again, the uniformity of the resulting beams is low. The effect of projecting beams diffracted by an ordinary diffractive element onto the measured object is therefore poor. In this embodiment a collimating beam-splitting element is used: it not only collimates non-collimated light but also splits it, i.e., the non-collimated light reflected by the mirror is emitted, after passing through the collimating beam-splitting element, as multiple collimated beams at different angles, and the emitted collimated beams have approximately equal cross-sectional areas and approximately equal energy fluxes, so the projection effect of the scattered-spot light obtained after beam diffraction is better. At the same time, the emitted laser light is dispersed into individual beams, further reducing the risk of injury to the human eye; and compared with other uniformly arranged structured light, speckle structured light consumes less power for the same collection effect.
Referring to Fig. 8, in some embodiments, the step S104 of determining the change in the distance between the current user and the dynamic object according to the current user's multiple frames of depth images and the depth data of the dynamic object in each frame image of the dynamic video may specifically include:
S1041, determining the moving direction and moving distance of the current user according to the current user's multiple frames of depth images.
S1042, determining the moving direction and moving distance of the dynamic object according to the depth data of the dynamic object in each frame image of the dynamic video.
S1043, determining the change in the distance between the current user and the dynamic object according to the moving direction and moving distance of the current user and the moving direction and moving distance of the dynamic object.
Referring again to Fig. 2, in some embodiments, steps S1041, S1042, and S1043 can be implemented by the processor 20.
In other words, the processor 20 can further be used to determine the moving direction and moving distance of the current user according to the current user's multiple frames of depth images; determine the moving direction and moving distance of the dynamic object according to the depth data of the dynamic object in each frame image of the dynamic video; and determine the change in the distance between the current user and the dynamic object according to the moving direction and moving distance of the current user and those of the dynamic object.
Specifically, the processor 20 can determine, from the current user's multiple frames of depth images, the position of the current user relative to the depth image acquisition component or relative to other reference objects in each depth image, and determine the moving direction and moving distance of the current user from the position information of the current user in each frame of depth image. The processor 20 can likewise determine, from the depth data of the dynamic object in each frame image of the dynamic video, the position of the dynamic object relative to the depth image acquisition component or relative to other reference objects, and determine the moving direction and moving distance of the dynamic object from that position information. The change in the distance between the current user and the dynamic object is then determined from the moving direction and moving distance of the current user and those of the dynamic object: that is, whether the distance between the current user and the dynamic object becomes larger or smaller, and whether the current user is moving away from or approaching the dynamic object.
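Steps S1041 to S1043 can be sketched as follows, assuming each position is a depth in meters along the camera axis taken from successive frames; the function names and sign convention are illustrative assumptions.

```python
import numpy as np

def movement(positions):
    """S1041/S1042: moving direction (+1 away from the camera, -1 toward
    it, 0 still) and total moving distance over a position sequence."""
    delta = positions[-1] - positions[0]
    return int(np.sign(delta)), abs(delta)

def distance_change(user_positions, object_positions):
    """S1043: whether the user-object distance became larger or smaller
    between the first and last frame."""
    d0 = abs(user_positions[0] - object_positions[0])
    d1 = abs(user_positions[-1] - object_positions[-1])
    return 'larger' if d1 > d0 else 'smaller' if d1 < d0 else 'unchanged'

user = [3.0, 3.2, 3.5]   # user backing away from the camera
obj = [2.0, 2.0, 2.0]    # object stays at 2 m in every video frame
print(movement(user))              # (1, 0.5)
print(distance_change(user, obj))  # 'larger'
```

Feeding the 'larger' or 'smaller' outcome into the S105 rule closes the loop from depth frames to volume.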
Referring to Fig. 3 and Fig. 9, an embodiment of the present invention also provides an electronic device 1000. The electronic device 1000 includes the video volume adjustment apparatus 100. The video volume adjustment apparatus 100 can be implemented with hardware and/or software, and includes an imaging device 10 and a processor 20.
The imaging device 10 includes a visible-light camera 11 and a depth image acquisition component 12.
Specifically, the visible-light camera 11 includes an image sensor 111 and a lens 112. The image sensor 111 includes a color filter array (such as a Bayer filter array), and the number of lenses 112 can be one or more. Each imaging pixel in the image sensor 111 senses the light intensity and wavelength information in the photographed scene and generates a set of raw image data; the image sensor 111 sends this raw image data to the processor 20, and the processor 20 obtains a color image after operations such as denoising and interpolation. The processor 20 can process each image pixel in the raw image data one by one in various formats; for example, each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the processor 20 can process each image pixel at the same or different bit depths.
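The denoising-and-interpolation step above can be illustrated with a deliberately simplified demosaicing sketch that averages each 2x2 Bayer block; real pipelines interpolate per pixel, and the RGGB layout here is an assumption for illustration only.

```python
import numpy as np

def debayer_block(raw):
    """raw: (2H, 2W) Bayer RGGB mosaic -> (H, W, 3) RGB image, one output
    pixel per 2x2 block (the two green samples are averaged)."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

# A single uniform 2x2 block: R=10, G=20/20, B=30.
raw = np.array([[10.0, 20.0],
                [20.0, 30.0]])
print(debayer_block(raw))  # [[[10. 20. 30.]]]
```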
The depth image acquisition component 12 includes a structured light projector 121 and a structured-light camera 122, and can be used to capture the depth information of the current user to obtain depth images. The structured light projector 121 is used to project structured light onto the current user, where the structured light pattern can be laser stripes, a Gray code, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The structured-light camera 122 includes an image sensor 1221 and a lens 1222; the number of lenses 1222 can be one or more. The image sensor 1221 is used to capture the structured light image projected onto the current user by the structured light projector 121. The structured light image can be sent by the depth image acquisition component 12 to the processor 20 for processing such as demodulation, phase recovery, and phase calculation to obtain the depth information of the current user.
In some embodiments, the functions of the visible-light camera 11 and the structured-light camera 122 can be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and that camera can capture both scene images and structured light images.
Besides structured light, the depth image of the current user can also be obtained by depth acquisition methods such as binocular vision or time-of-flight (TOF).
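The time-of-flight alternative mentioned above reduces to a one-line formula: depth is half the round-trip distance of a light pulse, d = c·t/2. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_seconds):
    """Depth from a pulse's round-trip time: d = c * t / 2."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~13.34 ns corresponds to roughly 2 m.
print(round(tof_depth(13.342e-9), 3))
```

In practice TOF sensors measure a phase shift of a modulated signal per pixel rather than timing a single pulse, but the depth conversion is the same.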
The processor 20 is further used to obtain a dynamic video embedded in a 3D background and multiple frames of depth images corresponding to the dynamic video; process each frame image in the dynamic video according to the multiple frames of depth images to obtain the depth data of the dynamic object in each frame image; determine the change in the distance between the current user and the dynamic object according to the current user's multiple frames of depth images and the depth data of the dynamic object in each frame image of the dynamic video; and adjust the volume of the dynamic video according to that change.
In addition, the video volume adjustment apparatus 100 also includes an image memory 30. The image memory 30 can be embedded in the electronic device 1000 or be a memory independent of the electronic device 1000, and may include direct memory access (DMA) features. The image data collected by the visible-light camera 11 or the structured-light-image data collected by the depth image acquisition component 12 can be transmitted to the image memory 30 for storage or caching. The processor 20 can read raw image data from the image memory 30, and can also read structured-light-image data from the image memory 30 for processing to obtain depth images. In addition, the image data and depth images can also be stored in the image memory 30 for the processor 20 to call for processing at any time.
The video volume adjustment apparatus 100 may also include a display 50. The display 50 can directly display the volume-adjusted dynamic video for the user to watch, or the video can be further processed by a graphics engine or a graphics processing unit (GPU). The video volume adjustment apparatus 100 also includes an encoder/decoder 60, which can encode and decode the image data of depth images and the like; the encoded image data can be stored in the image memory 30 and decompressed by the decoder for display before the image is shown on the display 50. The encoder/decoder 60 can be realized by a central processing unit (CPU), a GPU, or a coprocessor; in other words, the encoder/decoder 60 can be any one or more of a CPU, a GPU, and a coprocessor.
The video volume adjustment apparatus 100 also includes a control logic device 40. When the imaging device 10 is imaging, the processor 20 can analyze the data obtained by the imaging device to determine image statistics for one or more control parameters of the imaging device 10 (for example, exposure time). The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 to image with the determined control parameters. The control logic device 40 may include a processor and/or microcontroller that executes one or more routines (such as firmware); the one or more routines can determine the control parameters of the imaging device 10 according to the received image statistics.
Referring to Fig. 10, the electronic installation 1000 of the embodiment of the present invention includes one or more processors 200, a memory 300 and one or more programs 310. The one or more programs 310 are stored in the memory 300 and are configured to be executed by the one or more processors 200. The program 310 includes instructions for performing the video volume adjusting method of any one of the above embodiments.
For example, the program 310 includes instructions for performing the video volume adjusting method described in the following steps:
obtaining a dynamic video embedded in a 3D background, and the multiframe depth images corresponding to the dynamic video;
processing each frame image in the dynamic video according to the multiframe depth images, to obtain the depth data of the dynamic object in each frame image;
obtaining multiframe depth images of the active user;
determining the change in distance between the active user and the dynamic object according to the multiframe depth images of the active user and the depth data of the dynamic object in each frame image of the dynamic video;
adjusting the volume of the dynamic video according to the change in distance between the active user and the dynamic object.
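The final adjustment step of this sequence can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function name, the fixed linear step size and the [0.0, 1.0] volume range are assumptions:

```python
def adjust_volume(volume, prev_distance, curr_distance, step=0.05):
    """Adjust playback volume from the change in the user-object distance.

    Following the rule in the embodiments: if the distance between the
    active user and the dynamic object increases, the volume is reduced;
    if it decreases, the volume is increased. The result is clamped to
    the assumed [0.0, 1.0] range.
    """
    if curr_distance > prev_distance:    # user and object moved apart
        volume -= step
    elif curr_distance < prev_distance:  # user and object moved closer
        volume += step
    return max(0.0, min(1.0, volume))
```

A real player would likely scale the step with the magnitude of the distance change, instead of using a fixed increment, for smoother behaviour.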
For another example, the program 310 also includes instructions for performing the video volume adjusting method described in the following steps:
projecting structured light to the active user;
capturing multiframe structured-light images modulated by the active user;
demodulating the phase information corresponding to each pixel of the multiframe structured-light images to obtain the multiframe depth images.
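The demodulation step can be illustrated with the standard four-step phase-shifting formula, one common structured-light scheme. The patent does not commit to a particular demodulation method, so treat this as an assumed example:

```python
import numpy as np

def demodulate_phase(i0, i1, i2, i3):
    """Recover the wrapped phase at each pixel from four fringe images.

    With projected phase shifts of 0, pi/2, pi and 3*pi/2, the classic
    four-step formula gives the wrapped phase per pixel as
    atan2(I3 - I1, I0 - I2). The inputs are the intensity images
    captured by the structured-light camera.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

Because `np.arctan2` works element-wise, the same call handles a scalar test pixel or a full-resolution image array.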
The computer-readable recording medium of the embodiment of the present invention includes a computer program used in combination with the electronic installation 1000 capable of imaging. The computer program can be executed by the processor 200 to complete the video volume adjusting method of any one of the above embodiments.
For example, the computer program can be executed by the processor 200 to complete the video volume adjusting method described in the following steps:
for each frame structured-light image in the multiframe structured-light images, demodulating the phase information corresponding to each pixel in the structured-light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
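The phase-to-depth conversion depends on the calibrated geometry of the projector-camera pair, which the patent does not specify. A simplified reference-plane model, with made-up calibration constants, might look like:

```python
import numpy as np

def phase_to_depth(phase_map, reference_phase, wavelength_mm=10.0, scale=0.5):
    """Convert a demodulated phase map into a depth map (in mm).

    Simplified fringe-projection model: depth is taken proportional to
    the phase offset from a flat reference plane. `wavelength_mm` and
    `scale` stand in for the calibrated system geometry; both constants
    are assumptions for illustration.
    """
    offset = np.asarray(phase_map) - np.asarray(reference_phase)
    # wrap the offset into [-pi, pi) before scaling to depth
    offset = (offset + np.pi) % (2 * np.pi) - np.pi
    return offset * wavelength_mm * scale / (2 * np.pi)
```

A calibrated system would replace the proportional model with its measured phase-to-height mapping per pixel.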
For another example, the computer program can also be executed by the processor 200 to complete the video volume adjusting method described in the following steps:
determining the moving direction and the moving distance of the active user according to the multiframe depth images of the active user;
determining the moving direction and the moving distance of the dynamic object according to the depth data of the dynamic object in each frame image of the dynamic video;
determining the change in distance between the active user and the dynamic object according to the moving direction and moving distance of the active user and the moving direction and moving distance of the dynamic object.
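Reducing each frame to a single user-object distance makes the comparison step easy to express. In this sketch the per-frame scalar depths are an assumed simplification of the segmented depth regions described above:

```python
def distance_change(user_depths, object_depths):
    """Classify how the user-object distance evolves across frames.

    `user_depths` and `object_depths` are per-frame scalar depths, e.g.
    the mean depth of the segmented active-user / dynamic-object region
    (an assumed simplification of the per-pixel depth data). Comparing
    the first and last frames captures the net effect of both parties'
    moving directions and moving distances along the depth axis.
    """
    first = abs(user_depths[0] - object_depths[0])
    last = abs(user_depths[-1] - object_depths[-1])
    if last > first:
        return 'increasing'
    if last < first:
        return 'decreasing'
    return 'unchanged'
```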
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, without contradiction, those skilled in the art may combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two, three, etc., unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, fragment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process; and the scope of the preferred embodiments of the present invention includes other implementations, in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart, or otherwise described herein, may be considered, for example, an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, device or equipment (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, device or equipment and execute them). For the purpose of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, device or equipment. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection portion (electronic installation) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable way, and then stored in a computer memory.
It should be understood that each part of the present invention can be realized by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be realized by software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, they can be realized by any one of the following technologies well known in the art, or a combination thereof: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), etc.
Those of ordinary skill in the art can understand that all or part of the steps carried by the above embodiment methods can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention can be integrated in one processing module, or each unit can exist physically separately, or two or more units can be integrated in one module. The above integrated module can be realized in the form of hardware, or in the form of a software function module. If the integrated module is realized in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, etc. Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be construed as limitations of the present invention, and those of ordinary skill in the art can make changes, modifications, replacements and variations to the above embodiments within the scope of the present invention.
Claims (14)
- 1. A video volume adjusting method, characterised in that it includes:
obtaining a dynamic video embedded in a 3D background, and the multiframe depth images corresponding to the dynamic video;
processing each frame image in the dynamic video according to the multiframe depth images, to obtain the depth data of the dynamic object in each frame image;
obtaining multiframe depth images of the active user;
determining the change in distance between the active user and the dynamic object according to the multiframe depth images of the active user and the depth data of the dynamic object in each frame image of the dynamic video;
adjusting the volume of the dynamic video according to the change in distance between the active user and the dynamic object.
- 2. The method according to claim 1, characterised in that obtaining the multiframe depth images of the active user includes:
projecting structured light to the active user;
capturing multiframe structured-light images modulated by the active user;
demodulating the phase information corresponding to each pixel of the multiframe structured-light images to obtain the multiframe depth images.
- 3. The method according to claim 2, characterised in that demodulating the phase information corresponding to each pixel of the multiframe structured-light images to obtain the multiframe depth images includes:
for each frame structured-light image in the multiframe structured-light images, demodulating the phase information corresponding to each pixel in the structured-light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
- 4. The method according to claim 1, characterised in that processing each frame image in the dynamic video according to the multiframe depth images, to obtain the depth data of the dynamic object in each frame image, includes:
identifying each frame image in the dynamic video to obtain the dynamic object region in each frame image;
obtaining the depth data corresponding to the dynamic object region from the depth image corresponding to each frame image.
- 5. The method according to claim 1, characterised in that determining the change in distance between the active user and the dynamic object according to the multiframe depth images of the active user and the depth data of the dynamic object in each frame image of the dynamic video includes:
determining the moving direction and the moving distance of the active user according to the multiframe depth images of the active user;
determining the moving direction and the moving distance of the dynamic object according to the depth data of the dynamic object in each frame image of the dynamic video;
determining the change in distance between the active user and the dynamic object according to the moving direction and moving distance of the active user and the moving direction and moving distance of the dynamic object.
- 6. The method according to claim 1, characterised in that adjusting the volume of the dynamic video according to the change in distance between the active user and the dynamic object includes:
if the distance between the active user and the dynamic object increases, reducing the volume of the dynamic video;
if the distance between the active user and the dynamic object decreases, increasing the volume of the dynamic video.
- 7. A video volume adjusting device, characterised in that it includes:
a processor, the processor being used to obtain a dynamic video embedded in a 3D background and the multiframe depth images corresponding to the dynamic video, and to process each frame image in the dynamic video according to the multiframe depth images to obtain the depth data of the dynamic object in each frame image;
a depth image acquisition component, the depth image acquisition component being used to obtain multiframe depth images of the active user;
the processor being further used to determine the change in distance between the active user and the dynamic object according to the multiframe depth images of the active user and the depth data of the dynamic object in each frame image of the dynamic video, and to adjust the volume of the dynamic video according to that change in distance.
- 8. The device according to claim 7, characterised in that the depth image acquisition component includes a structured light projector and a structured light camera, the structured light projector being used to project structured light to the active user; the structured light camera being used to:
capture multiframe structured-light images modulated by the active user;
demodulate the phase information corresponding to each pixel of the multiframe structured-light images to obtain the multiframe depth images.
- 9. The device according to claim 8, characterised in that the structured light camera is further used to:
for each frame structured-light image in the multiframe structured-light images, demodulate the phase information corresponding to each pixel in the structured-light image;
convert the phase information into depth information; and
generate the depth image according to the depth information.
- 10. The device according to claim 7, characterised in that the processor is further used to:
identify each frame image in the dynamic video to obtain the dynamic object region in each frame image;
obtain the depth data corresponding to the dynamic object region from the depth image corresponding to each frame image.
- 11. The device according to claim 7, characterised in that the processor is further used to:
determine the moving direction and the moving distance of the active user according to the multiframe depth images of the active user;
determine the moving direction and the moving distance of the dynamic object according to the depth data of the dynamic object in each frame image of the dynamic video;
determine the change in distance between the active user and the dynamic object according to the moving direction and moving distance of the active user and the moving direction and moving distance of the dynamic object.
- 12. The device according to claim 7, characterised in that the processor is further used to:
when the distance between the active user and the dynamic object increases, reduce the volume of the dynamic video;
when the distance between the active user and the dynamic object decreases, increase the volume of the dynamic video.
- 13. An electronic installation, characterised in that the electronic installation includes:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by the one or more processors, the programs including instructions for performing the video volume adjusting method according to any one of claims 1 to 6.
- 14. A computer-readable recording medium, characterised in that it includes a computer program used in combination with an electronic installation capable of imaging, the computer program being executable by a processor to complete the video volume adjusting method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710812122.4A CN107613383A (en) | 2017-09-11 | 2017-09-11 | Video volume adjusting method, device and electronic installation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710812122.4A CN107613383A (en) | 2017-09-11 | 2017-09-11 | Video volume adjusting method, device and electronic installation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107613383A true CN107613383A (en) | 2018-01-19 |
Family
ID=61062149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710812122.4A Pending CN107613383A (en) | 2017-09-11 | 2017-09-11 | Video volume adjusting method, device and electronic installation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107613383A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102467234A (en) * | 2010-11-12 | 2012-05-23 | Lg电子株式会社 | Method for providing display image in multimedia device and multimedia device thereof |
CN105933845A (en) * | 2010-03-19 | 2016-09-07 | 三星电子株式会社 | Method and apparatus for reproducing three-dimensional sound |
- 2017-09-11 CN CN201710812122.4A patent/CN107613383A/en active Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110427887A (en) * | 2019-08-02 | 2019-11-08 | 腾讯科技(深圳)有限公司 | A kind of membership's recognition methods and device based on intelligence |
CN110427887B (en) * | 2019-08-02 | 2023-03-10 | 腾讯科技(深圳)有限公司 | Member identity identification method and device based on intelligence |
CN110535735A (en) * | 2019-08-15 | 2019-12-03 | 青岛海尔科技有限公司 | Playback equipment control method and device based on Internet of Things operating system |
CN111930336A (en) * | 2020-07-29 | 2020-11-13 | 歌尔科技有限公司 | Volume adjusting method and device of audio device and storage medium |
CN112651566A (en) * | 2020-12-30 | 2021-04-13 | 湖南虹康规划勘测咨询有限公司 | Comprehensive improvement and evaluation analysis method, storage medium, terminal and system for global land |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107797664A (en) | Content display method, device and electronic installation | |
CN107807806A (en) | Display parameters method of adjustment, device and electronic installation | |
CN107742296A (en) | Dynamic image generation method and electronic installation | |
CN107610077A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107995434A (en) | Image acquiring method, electronic device and computer-readable recording medium | |
CN107707831A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107509045A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707835A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107613383A (en) | Video volume adjusting method, device and electronic installation | |
CN107734267A (en) | Image processing method and device | |
CN107509043A (en) | Image processing method and device | |
CN107707838A (en) | Image processing method and device | |
CN107734264A (en) | Image processing method and device | |
CN107610078A (en) | Image processing method and device | |
CN107644440A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107705278A (en) | The adding method and terminal device of dynamic effect | |
CN107527335A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107610127A (en) | Image processing method, device, electronic installation and computer-readable recording medium | |
CN107610076A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107682740A (en) | Composite tone method and electronic installation in video | |
CN107454336A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107682656A (en) | Background image processing method, electronic equipment and computer-readable recording medium | |
CN107592491A (en) | Video communication background display methods and device | |
CN107613228A (en) | The adding method and terminal device of virtual dress ornament | |
CN107613223A (en) | Image processing method and device, electronic installation and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
|
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180119 |
|