CN107623817A - Video background processing method, device and mobile terminal - Google Patents


Info

Publication number
CN107623817A
CN107623817A
Authority
CN
China
Prior art keywords
user
background
image
video
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710813311.3A
Other languages
Chinese (zh)
Other versions
CN107623817B (en)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710813311.3A priority Critical patent/CN107623817B/en
Publication of CN107623817A publication Critical patent/CN107623817A/en
Application granted granted Critical
Publication of CN107623817B publication Critical patent/CN107623817B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a video background processing method, a device, and a mobile terminal. The method includes: when it is determined that a third user other than a second user is present in the current video picture of the second user who is in a video call with a first user, obtaining a three-dimensional background image of the scene where the first user is located, and obtaining a depth image of the first user; processing the three-dimensional background image and the depth image to extract the person region of the first user in the three-dimensional background image, and determining the background region of the three-dimensional background image according to the person region; and identifying a privacy region in the background region and blurring the privacy region to generate a new video background for the first user. Thus, during a video call, the privacy region of the video background is blurred so that the third user cannot see the first user's private information in the scene, preventing the leakage of the user's personal privacy and improving the user experience.

Description

Video background processing method, device and mobile terminal
Technical field
The present application relates to the field of communication technology, and in particular to a video background processing method, a device, and a mobile terminal.
Background art
With the development of science and technology, the functions of terminals such as mobile phones and tablet computers have become increasingly powerful. For example, more and more terminals are equipped with cameras, through which users can take photos, record video, have video chats, and so on.
When a user video-chats with another party through a camera, the video picture shows not only the user but also the scene where the user is located, which may contain some of the user's personal privacy. Therefore, during a video call the user usually only wants the other party to see the picture of the scene where the user is located, and does not want other, unrelated users to see the user's private information in the scene.
However, during a two-party video call, when the user finds that other people are present in the other party's video picture, the user has to end the video call manually, which is inconvenient; and if the call is not ended in time, the video picture may already have been seen. Therefore, it is of great significance to prevent the user's personal privacy information from being leaked during a video call, to protect the user's privacy, and to improve the user experience.
Summary of the invention
The purpose of the present application is to solve at least one of the above technical problems to a certain extent.
Therefore, a first purpose of the present application is to propose a video background processing method. During a video call, the method blurs the privacy region of the video background so that a third user cannot see the first user's private information in the scene, preventing the leakage of the user's personal privacy and improving the user experience.
A second purpose of the present application is to propose a video background processing device.
A third purpose of the present application is to propose a computer-readable storage medium.
A fourth purpose of the present application is to propose a mobile terminal.
A fifth purpose of the present application is to propose a computer program product.
A video background processing method according to an embodiment of the first aspect of the present application includes: obtaining the current video picture of a second user in a video call with a first user; when it is determined that a third user other than the second user is present in the current video picture, obtaining the three-dimensional background image of the scene where the first user is located, and obtaining the depth image of the first user; processing the three-dimensional background image and the depth image to extract the person region of the first user in the three-dimensional background image, and determining the background region of the three-dimensional background image according to the person region; and identifying the privacy region in the background region and blurring the privacy region to generate a new video background for the first user.
According to the video background processing method of this embodiment of the present application, during a video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are then processed to extract the person region of the first user in the three-dimensional background image, the background region is determined according to the person region, the privacy region in the background region is identified, and the privacy region is blurred to generate a new video background for the first user. Thus, during the video call, blurring the privacy region of the video background prevents the third user from seeing the first user's private information in the scene, so that the user's personal privacy is not leaked and the user experience is improved.
A video background processing device according to an embodiment of the second aspect of the present application includes: a first acquisition module for obtaining the current video picture of a second user in a video call with a first user; an image capture module for obtaining, when it is determined that a third user other than the second user is present in the current video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user; a first processing module for processing the three-dimensional background image and the depth image to extract the person region of the first user in the three-dimensional background image and determining the background region of the three-dimensional background image according to the person region; and a second processing module for identifying the privacy region in the background region and blurring the privacy region to generate a new video background for the first user.
According to the video background processing device of this embodiment of the present application, during a video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are then processed to extract the person region of the first user in the three-dimensional background image, the background region is determined according to the person region, the privacy region in the background region is identified, and the privacy region is blurred to generate a new video background for the first user. Thus, during the video call, blurring the privacy region of the video background prevents the third user from seeing the first user's private information in the scene, so that the user's personal privacy is not leaked and the user experience is improved.
An embodiment of the third aspect of the present application provides one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the video background processing method of the first-aspect embodiment of the present application.
A mobile terminal according to an embodiment of the fourth aspect of the present application includes a memory and a processor. Computer-readable instructions are stored in the memory and, when executed by the processor, cause the processor to perform the video background processing method of the first-aspect embodiment of the present application.
According to the mobile terminal of this embodiment of the present application, during a video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are then processed to extract the person region of the first user in the three-dimensional background image, the background region is determined according to the person region, the privacy region in the background region is identified, and the privacy region is blurred to generate a new video background for the first user. Thus, during the video call, blurring the privacy region of the video background prevents the third user from seeing the first user's private information in the scene, so that the user's personal privacy is not leaked and the user experience is improved.
An embodiment of the fifth aspect of the present application provides a computer program product; when the instructions in the computer program product are executed by a processor, the video background processing method of the first-aspect embodiment of the present application is performed.
Additional aspects and advantages of the present application will be set forth in part in the following description, will in part become apparent from that description, or will be learned through practice of the application.
Brief description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of a video background processing method according to an embodiment of the present application;
Fig. 2 is a detailed flow chart of obtaining the depth image of the first user;
Fig. 3 is a detailed flow chart of demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image of the first user;
Fig. 4(a) to Fig. 4(e) are schematic diagrams of a structured-light measurement scene according to an embodiment of the present application;
Fig. 5(a) and Fig. 5(b) are schematic diagrams of a structured-light measurement scene according to an embodiment of the present application;
Fig. 6 is a detailed flow chart of processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image;
Fig. 7 is a flow chart of a video background processing method according to another embodiment of the present application;
Fig. 8 is a structural schematic diagram of a video background processing device according to an embodiment of the present application;
Fig. 9 is a structural schematic diagram of a video background processing device according to another embodiment of the present application;
Fig. 10 is a structural schematic diagram of a video background processing device according to yet another embodiment of the present application;
Fig. 11 is a structural schematic diagram of a video background processing device according to a further embodiment of the present application;
Fig. 12 is a schematic diagram of an image processing circuit according to an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the application and are not to be construed as limiting it.
The video background processing method, device, mobile terminal, and computer-readable storage medium of the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a video background processing method according to an embodiment of the present application. The video background processing method of this embodiment is applied in a terminal, where the terminal may be a hardware device with any of various operating systems, such as a mobile phone, a tablet computer, or a smart wearable device. As shown in Fig. 1, the video background processing method includes the following steps:
S11: obtain the current video picture of the second user in a video call with the first user.
Specifically, during the video call between the first user and the second user, the terminal of the first user obtains the current video picture of the second user from the video display interface.
For example, suppose the first user is user A and the second user is user B. During the video call between user A and user B, user A's terminal obtains user B's current video picture from the video display interface.
S12: when it is determined that a third user other than the second user is present in the current video picture, obtain the three-dimensional background image of the scene where the first user is located, and obtain the depth image of the first user.
As an exemplary embodiment, after the current video picture of the second user is obtained, it can be determined by face recognition whether a third user other than the second user is present in the current video picture. Specifically, face recognition can be performed on the current video picture to obtain a face recognition result, and it is judged from this result whether the current video picture contains faces other than the face of the second user; if so, it is determined that a third user other than the second user is present in the current video picture.
In order to accurately obtain the three-dimensional background image of the scene where the first user is located and the depth image of the first user, as an exemplary embodiment, both can be acquired by means of structured light.
The three-dimensional background image may be a color image, and the depth image of the first user contains the first user's depth information. The scene range of the three-dimensional background image is consistent with that of the depth image, and for each pixel in the three-dimensional background image the corresponding depth information can be found in the depth image.
As an exemplary embodiment, structured light can be projected onto the scene where the first user is located and a structured-light image of the scene captured; the depth image of the scene where the first user is located is then obtained from that structured-light image, and the three-dimensional background image of the scene is generated from the depth image together with the color information of the scene.
As an exemplary embodiment, as shown in Fig. 2, the process of obtaining the depth image of the first user may include:
S21: project structured light onto the first user.
S22: capture the structured-light image modulated by the first user.
S23: demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth image of the first user.
Specifically, while the first user is in a video call with the second user through the terminal, the structured-light projector in the first user's terminal can emit structured light toward the first user; the structured-light camera in the terminal can then capture the structured-light image modulated by the first user and demodulate the phase information corresponding to each pixel of that image to obtain the first user's depth image.
Specifically, after the structured-light projector projects structured light of a certain pattern onto the face and body of the first user, a structured-light image modulated by the first user is formed on the surface of the first user's face and body. The structured-light camera captures the modulated structured-light image, which is then demodulated to obtain the depth image of the first user.
The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, and so on.
As an exemplary embodiment, as shown in Fig. 3, the process of demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image of the first user may include:
S31: demodulate the phase information corresponding to each pixel in the structured-light image.
S32: convert the phase information into depth information.
S33: generate the depth image of the first user according to the depth information.
Specifically, compared with unmodulated structured light, the phase information of the modulated structured light has changed, so the structured light appearing in the structured-light image is distorted, and the change in phase can characterize the depth information of the object. Therefore, the structured-light camera first demodulates the phase information corresponding to each pixel in the structured-light image and then calculates depth information from that phase information, thereby obtaining the depth image of the first user.
In order to make the process of capturing the depth image of the first user's face and body with structured light clearer to those skilled in the art, its principle is illustrated below taking the widely used grating projection technology (fringe projection technology) as an example. Grating projection belongs to surface structured light in the broad sense.
As shown in Fig. 4(a), when surface structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured-light projector; the structured-light camera then captures the bending of the fringes after modulation by the object, and the curved fringes are demodulated to obtain the phase, which is converted into depth information to obtain the depth image. To avoid errors or error coupling, the structured-light camera and the structured-light projector must be calibrated before depth information is collected with structured light; the calibration includes the calibration of geometric parameters (for example, the relative position of the structured-light camera and the structured-light projector), of the internal parameters of the structured-light camera, of the internal parameters of the structured-light projector, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the phase must subsequently be obtained from the distorted fringes, for example with the four-step phase-shifting method, four fringe patterns with a phase difference of π/2 are generated here; the structured-light projector then projects the four patterns onto the measured object (the mask shown in Fig. 4(a)) in a time-shared manner, and the structured-light camera collects the image shown on the left of Fig. 4(b) while also reading the fringes of the reference plane shown on the right of Fig. 4(b).
In the second step, phase recovery is performed. The structured-light camera calculates the modulated phase map from the four collected fringe patterns (i.e., the structured-light images); the result at this point is a wrapped phase map. Since the result of the four-step phase-shifting algorithm is obtained with the arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The principal phase values finally obtained are shown in Fig. 4(c).
During phase recovery, de-jump processing is required, i.e., the wrapped phase must be restored to a continuous phase. As shown in Fig. 4(d), the left side is the modulated continuous phase map and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e., the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase; this phase difference characterizes the depth of the measured object relative to the reference plane. The phase difference is then substituted into the phase-to-depth conversion formula (the parameters involved in the formula are obtained by calibration), yielding the three-dimensional model of the measured object shown in Fig. 4(e).
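The four-step phase-shifting recovery described above can be sketched as follows. This is a simplified illustration, not the patent's implementation: it assumes ideal cosine fringes shifted by π/2, uses `numpy.unwrap` for the de-jump step, and replaces the calibrated phase-to-depth conversion formula with a single linear scale factor `k`.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped (principal-value) phase in [-pi, pi] from four fringe
    images shifted by 0, pi/2, pi, 3*pi/2, via the arctangent."""
    return np.arctan2(i4 - i2, i1 - i3)

def depth_from_phase(measured, reference, k=1.0):
    """Unwrap both phase maps row by row, subtract to get the phase
    difference, and scale by an assumed linear calibration constant k."""
    dphi = np.unwrap(measured, axis=1) - np.unwrap(reference, axis=1)
    return k * dphi

# Synthetic demo: an "object" that distorts the fringes by a known phase.
x = np.linspace(0, 4 * np.pi, 256)
true_phase = 0.5 * np.sin(x)                        # object-induced distortion
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
obj = [np.cos(x + true_phase + s) for s in shifts]  # modulated fringes
ref = [np.cos(x + s) for s in shifts]               # reference-plane fringes

depth = depth_from_phase(four_step_phase(*obj)[None, :],
                         four_step_phase(*ref)[None, :])
```

With ideal fringes the recovered `depth` (for `k = 1`) reproduces the object-induced phase exactly; in a real system `k` is replaced by the calibrated phase-to-depth formula mentioned in the text.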
It should be understood that in practical applications the structured light employed in the embodiments of the present application may, depending on the specific application scenario, be any other pattern besides the above grating.
As a possible implementation, the present application can also use speckle structured light to collect the depth information of the first user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffractive element that is essentially a flat plate and has a relief diffraction structure with a particular phase distribution; its cross section has a stepped relief structure of two or more levels. The thickness of the substrate in the diffractive element is approximately 1 micron, the height of each step is non-uniform, and the heights may range from 0.5 micron to 0.9 micron. The structure shown in Fig. 5(a) is a local diffraction structure of the collimating beam-splitting element of this embodiment, and Fig. 5(b) is a cross-sectional side view along section A-A, with abscissa and ordinate in microns. The speckle pattern generated by speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information is obtained with speckle structured light, the speckle patterns in space must first be calibrated; for example, within a range of 0 to 4 meters from the structured-light camera, a reference plane is taken every 1 centimeter, so that 400 speckle images are saved after calibration. The smaller the calibration spacing, the higher the precision of the obtained depth information. The structured-light projector then projects the speckle structured light onto the measured object (i.e., the first user), and the height differences of the surface of the measured object change the speckle pattern of the projected speckle structured light. After the structured-light camera captures the speckle pattern projected onto the measured object (i.e., the structured-light image), a cross-correlation operation is performed between that speckle pattern and each of the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space shows a peak on the correlation images; superimposing these peaks and applying interpolation yields the depth information of the measured object.
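The correlation-based depth lookup against calibrated reference planes can be illustrated with a toy sketch. It scores whole frames rather than per-pixel windows and omits the peak superposition and interpolation steps, so it recovers only a single depth per frame; the 400-plane, 1 cm calibration grid follows the example in the text.

```python
import numpy as np

def depth_by_correlation(captured, reference_stack, plane_depths_cm):
    """Correlate the captured speckle pattern with every calibrated
    reference plane and return the depth of the best-matching plane.
    (Real systems correlate per-pixel windows and interpolate between
    planes; this toy version scores whole frames.)"""
    scores = [float(np.sum(captured * ref)) for ref in reference_stack]
    return int(plane_depths_cm[int(np.argmax(scores))])

# Synthetic calibration: one random speckle plane per centimetre, 0-4 m.
rng = np.random.default_rng(0)
planes = rng.random((400, 32, 32))
depths_cm = np.arange(1, 401)
# "Captured" image: the 120 cm plane plus a little noise.
captured = planes[119] + 0.05 * rng.random((32, 32))

result = depth_by_correlation(captured, planes, depths_cm)
```

Because the captured frame is a lightly perturbed copy of the 120 cm reference plane, the correlation peak lands on that plane and `result` comes out as 120.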
Common diffractive elements produce many diffracted beams, but the intensities of those beams differ greatly, and the risk of injury to the human eye is correspondingly high; even if the diffracted light is diffracted again, the uniformity of the resulting beams is low, so projecting onto the measured object with beams diffracted by an ordinary diffractive element gives poor results. In this embodiment a collimating beam-splitting element is used: it not only collimates non-collimated light but also splits it, i.e., the non-collimated light reflected by the mirror emerges from the collimating beam-splitting element as multiple collimated beams at different angles, and these collimated beams have approximately equal cross-sectional areas and approximately equal energy fluxes, so projection with the scattered light diffracted from these beams is better. At the same time, the laser output is dispersed over each beam, further reducing the risk of injury to the human eye; and compared with other, uniformly arranged structured light, speckle structured light consumes less power for the same collection effect.
S13: process the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image, and determine the background region of the three-dimensional background image according to the person region.
In an embodiment of the present application, as shown in Fig. 6, the process of processing the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image may include:
S61: identify the face region in the three-dimensional background image.
S62: obtain the depth information corresponding to the face region from the depth image of the first user.
S63: determine the depth range of the person region according to the depth information of the face region.
S64: determine, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range.
The face region in the three-dimensional background image is identified with a trained deep learning model, and the depth information of the face region can then be determined from the correspondence between the three-dimensional background image and the depth image. Since the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to each of these features in the depth image differ; for example, when the face directly faces the structured-light camera, the depth data corresponding to the nose may be small in the captured depth image, while the depth data corresponding to the ears may be large. Therefore, the depth information of the face region may be a single value or a range of values; when it is a single value, the value can be obtained by averaging the depth data of the face region, or by taking their median.
Since the person region contains the face region, i.e., the person region and the face region lie within some common depth range, the depth range of the person region can be set according to the depth information of the face region once the latter is determined, and the person region that falls within that depth range and is connected to the face region can then be extracted according to that depth range.
In this way, the person region can be determined from the three-dimensional background image according to depth information. Since the acquisition of depth information is not affected by environmental factors such as illumination and color temperature, the extracted person region is more accurate.
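Steps S61 to S64 can be sketched as a depth-range flood fill. The face box, the ±margin used to widen the face depth into a person depth range, and the 4-connected flood fill are all illustrative assumptions; the patent leaves the exact range construction and connectivity test unspecified, and the face box itself is assumed to come from a separate face detector.

```python
import numpy as np
from collections import deque

def person_region(depth, face_box, margin=0.3):
    """Toy sketch of S61-S64: take the median depth of a given face box,
    widen it by +-margin into a depth range, then flood-fill from the face
    to keep only pixels connected to it within that range."""
    y0, y1, x0, x1 = face_box
    face_depth = float(np.median(depth[y0:y1, x0:x1]))
    lo, hi = face_depth - margin, face_depth + margin
    in_range = (depth >= lo) & (depth <= hi)

    mask = np.zeros_like(depth, dtype=bool)
    queue = deque([((y0 + y1) // 2, (x0 + x1) // 2)])  # seed inside the face
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < depth.shape[0] and 0 <= x < depth.shape[1]):
            continue
        if mask[y, x] or not in_range[y, x]:
            continue
        mask[y, x] = True
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return mask  # True where the connected person region lies
```

An object at the same depth but not connected to the face (e.g. another item in the scene) is correctly excluded, which is the point of the connectivity requirement in S64.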
S14: identify the privacy region in the background region, and blur the privacy region to generate a new video background for the first user.
As an exemplary embodiment, after the background region of the three-dimensional background image is obtained, the background region can be examined for preset privacy objects in order to determine the privacy region in the background region.
The privacy region is the region where private information is located.
A preset privacy object is a privacy object set in advance; for example, a preset privacy object may be a bed, a photo album containing faces, and so on.
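The blurring of S14 can be illustrated with a minimal region-restricted filter. A box filter stands in for whatever blur the patent intends (a Gaussian blur, e.g. OpenCV's `GaussianBlur`, would be the usual choice), and the mask marking the privacy region is assumed to come from the privacy-object recognition described above.

```python
import numpy as np

def blur_region(image, mask, radius=3):
    """Toy sketch of S14: box-blur only the pixels flagged as a privacy
    region (mask == True), leaving the rest of the frame untouched."""
    h, w = image.shape
    padded = np.pad(image, radius, mode="edge")
    blurred = np.zeros((h, w), dtype=float)
    k = 2 * radius + 1
    for y in range(h):
        for x in range(w):
            blurred[y, x] = padded[y:y + k, x:x + k].mean()
    out = image.astype(float)          # astype copies; original untouched
    out[mask] = blurred[mask]
    return out
```

Only the masked pixels change, so the person region and the non-private background keep their original sharpness while the privacy region is smoothed out.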
According to the video background processing method provided by the embodiments of the present application, during a video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located and the depth image of the first user are obtained; the three-dimensional background image and the depth image are then processed to extract the person region of the first user in the three-dimensional background image, the background region is determined according to the person region, the privacy region in the background region is identified, and the privacy region is blurred to generate a new video background for the first user. Thus, during the video call, blurring the privacy region of the video background prevents the third user from seeing the first user's private information in the scene, so that the user's personal privacy is not leaked and the user experience is improved.
In an embodiment of the present application, in order to accurately determine whether a third user other than the second user is present in the current video picture, as shown in Fig. 7, the method may further include:
S71: obtain the three-dimensional face model of the second user.
The three-dimensional face model is established by the second user's terminal by projecting structured light onto the second user.
Specifically, during the video call, the terminal of the first user obtains the three-dimensional face model of the second user by receiving it from the second user's terminal.
The process by which the second user's terminal (hereinafter referred to as the second user terminal for convenience of description) establishes the three-dimensional face model of the second user by structured light is as follows: the structured-light projector in the second user terminal projects structured light onto the second user; the structured-light camera then captures the structured-light image of the second user and determines the depth image of the second user from that structured-light image; and the three-dimensional face model of the second user is generated from the depth image.
S72: perform face recognition on the current video picture to obtain the face regions in the current video picture.
S73: according to the three-dimensional face model of the second user, judge whether the face regions in the current video picture include a face region other than the face region of the second user; if so, perform step S74.
S74: determine that a third user other than the second user is present in the current video picture.
As an exemplary embodiment, after the face regions in the current video picture are obtained, it may be determined whether the facial feature information of each face region in the current video picture matches the three-dimensional face model of the second user. If not, it is determined that a face region other than the second user's is present in the current video picture; if so, it is determined that only the second user's face region is present in the current video picture.
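The matching of facial feature information against the second user's model in steps S72-S74 might be sketched as follows. The feature-vector representation, the cosine similarity measure and the `threshold` value are all assumptions for illustration; the disclosure does not specify how features derived from the three-dimensional face model are compared:

```python
import math

def has_third_user(face_features, second_user_model, threshold=0.9):
    """Return True when any detected face does not match the second user's
    model. `face_features` is a list of feature vectors (one per detected
    face) and `second_user_model` a reference vector; both are hypothetical
    stand-ins for features derived from the 3-D face model."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm
    # a face whose similarity falls below the threshold is not the second user
    return any(cosine(f, second_user_model) < threshold for f in face_features)
```

When this returns True, the method proceeds to obtain the three-dimensional background image and depth image of the first user as in S74.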
On the basis of the above embodiments, in one embodiment of the present application, in order to intelligently restore the user's original video background and spare the user the trouble of restoring it manually, after the new video background of the first user is generated, when it is detected again that only the second user is present in the current video picture, the video background of the first user is switched back to the original video background.
That is, after the first user has carried on the video call with the new video background, when it is detected again that only the second user is present in the second user's current video picture, i.e. when the intruder (the third user) is detected to have left, the first user's video background can be intelligently switched back to the original video background. The user therefore does not need to restore the original video background manually, which improves the user experience.
On the basis of the above embodiments, in one embodiment of the present application, in order to let the user quickly switch back to the original video background, after the new video background of the first user is generated, when it is detected again that only the second user is present in the current video picture, prompt information asking whether to restore the first user's video background to the original video background is displayed, and when a confirmation instruction to restore the video background is received from the first user, the video background of the first user is switched back to the original video background.
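The two switching behaviours described above (automatic restoration, or restoration after a confirmation prompt) can be summarised as a small decision function. The function and its `ask_user` callback are hypothetical illustrations, not part of the disclosure:

```python
def next_background(third_user_present, blurred_active, ask_user=None):
    """Decide whether the blurred video background should be active.
    Returns True when the blurred background should be used, False when
    the original background should be (re)used. `ask_user` is a
    hypothetical prompt callback returning True to confirm restoration;
    when it is None, restoration is automatic once the intruder leaves."""
    if third_user_present:
        return True   # third user visible: keep privacy regions blurred
    if not blurred_active:
        return False  # original background already in use
    if ask_user is None or ask_user():
        return False  # restore the original video background
    return True       # user declined the prompt: keep the blurred background
```

Calling this once per detection cycle reproduces both embodiments: omit `ask_user` for silent restoration, or supply it to model the prompt variant.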
In order to implement the above embodiments, the present application further proposes a video background processing device.
Fig. 8 is a schematic structural diagram of a video background processing device according to one embodiment of the present application.
As shown in Fig. 8, the video background processing device of the embodiment of the present application may include a first acquisition module 110, an image acquisition module 120, a first processing module 130 and a second processing module 140, wherein:
The first acquisition module 110 is configured to obtain the current video picture of the second user on a video call with the first user.
The image acquisition module 120 is configured to, when it is determined that a third user other than the second user is present in the current video picture, obtain the three-dimensional background image of the scene where the first user is located and obtain the depth image of the first user.
The first processing module 130 is configured to process the three-dimensional background image and the depth image of the first user to extract the first user's person region in the three-dimensional background image, and to determine the background region in the three-dimensional background image according to the person region.
The second processing module 140 is configured to identify the privacy region in the background region and blur the privacy region to generate a new video background for the first user.
In one embodiment of the present application, on the basis of Fig. 8, as shown in Fig. 9, the image acquisition module 120 may include a structured light projector 121 and a structured light camera 122, wherein:
The structured light projector 121 is configured to project structured light onto the first user.
The structured light camera 122 is configured to capture the structured light image modulated by the first user, and to demodulate the phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
As an exemplary embodiment, the structured light camera 122 is specifically configured to demodulate the phase information corresponding to each pixel in the structured light image, convert the phase information into depth information, and generate the depth image of the first user from the depth information.
In one embodiment of the present application, the first processing module 130 is specifically configured to: identify the face region in the three-dimensional background image; obtain depth information corresponding to the face region from the depth image of the first user; determine the depth range of the person region according to the depth information of the face region; and determine, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range.
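The person-region extraction performed by the first processing module 130 can be sketched as follows, under stated assumptions: the depth image is a 2-D list of depth values in metres, `face_box` is a detected face rectangle `(y0, x0, y1, x1)`, a fixed `margin` stands in for however the depth range is actually derived, and a flood fill stands in for the connectivity test:

```python
def extract_person_region(depth_image, face_box, margin=0.3):
    """Derive a depth range from the face region, then keep the pixels
    connected to the face whose depth falls inside that range."""
    h, w = len(depth_image), len(depth_image[0])
    y0, x0, y1, x1 = face_box
    face_depths = [depth_image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    lo = min(face_depths) - margin  # depth range of the person region
    hi = max(face_depths) + margin
    person = set()
    # flood fill outward from the face region over in-range pixels
    stack = [(y, x) for y in range(y0, y1) for x in range(x0, x1)]
    while stack:
        y, x = stack.pop()
        if (y, x) in person or not (0 <= y < h and 0 <= x < w):
            continue
        if lo <= depth_image[y][x] <= hi:
            person.add((y, x))
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return person
```

The complement of the returned pixel set within the three-dimensional background image then gives the background region of S13.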
In one embodiment of the present application, in order to accurately determine whether a third user is present in the current video picture of the second user, on the basis of Fig. 8, as shown in Fig. 10, the device may further include a second acquisition module 150, an identification module 160 and a judgment module 170, wherein:
The second acquisition module 150 is configured to obtain the three-dimensional face model of the second user.
The three-dimensional face model is established by the second user's terminal by projecting structured light onto the second user.
The identification module 160 is configured to perform face recognition on the current video picture to obtain the face regions in the current video picture.
The judgment module 170 is configured to judge, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than the face region of the second user.
As an exemplary embodiment, the judgment module 170 is specifically configured to judge whether the facial feature information of each face region in the current video picture matches the three-dimensional face model of the second user; if not, it is determined that a face region other than the second user's is present in the current video picture; if so, it is determined that only the second user's face region is present in the current video picture.
When the judgment module judges that a face region other than the second user's is present in the current video picture, the image acquisition module 120 is further configured to determine that a third user other than the second user is present in the current video picture, obtain the three-dimensional background image of the scene where the first user is located, and obtain the depth image of the first user.
It should be noted that the second acquisition module 150, the identification module 160 and the judgment module 170 of the device embodiment shown in Fig. 10 may also be included in the device embodiment of Fig. 9 above; the present application is not limited in this respect.
In one embodiment of the present application, in order to let the user conveniently switch the video background back to the original video background, on the basis of Fig. 8, as shown in Fig. 11, the device may further include:
A third processing module 180, configured to, when it is detected again that only the second user is present in the current video picture, switch the video background of the first user back to the original video background; or, when it is detected again that only the second user is present in the current video picture, display prompt information asking whether to restore the video background of the first user to the original video background, and, upon receiving a confirmation instruction from the first user to restore the video background, switch the video background of the first user back to the original video background.
It should be noted that the third processing module 180 of the device embodiment shown in Fig. 11 may also be included in the device embodiments of Figs. 9-10 above; the present application is not limited in this respect.
It should also be noted that the foregoing explanation of the video background processing method embodiments also applies to the video background processing device of this embodiment; the implementation principles are similar and are not repeated here.
According to the video background processing device of the embodiment of the present application, during a video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located is obtained, and the depth image of the first user is obtained; the three-dimensional background image and the depth image of the first user are then processed to extract the first user's person region in the three-dimensional background image, the background region is determined according to the person region, the privacy region in the background region is identified, and the privacy region is blurred to generate a new video background for the first user. Thus, during the video call, by blurring the privacy region in the video background, the third user cannot view the first user's private information in the scene, the user's personal privacy is protected from leakage, and the user experience is improved.
In order to implement the above embodiments, the present application further proposes a mobile terminal.
A mobile terminal includes the video background processing device of the second-aspect embodiments of the present application.
According to the mobile terminal of the embodiment of the present application, during a video call between the first user and the second user, when it is determined that a third user is present in the second user's video picture, the three-dimensional background image of the scene where the first user is located is obtained, and the depth image of the first user is obtained; the three-dimensional background image and the depth image of the first user are then processed to extract the first user's person region in the three-dimensional background image, the background region is determined according to the person region, the privacy region in the background region is identified, and the privacy region is blurred to generate a new video background for the first user. Thus, during the video call, by blurring the privacy region in the video background, the third user cannot view the first user's private information in the scene, the user's personal privacy is protected from leakage, and the user experience is improved.
Embodiments of the present application further provide a computer-readable storage medium: a non-volatile computer-readable storage medium containing one or more computer-executable instructions which, when executed by one or more processors, cause the processor(s) to perform the foregoing video background processing method.
In order to implement the above embodiments, the present application further proposes a mobile terminal.
The above mobile terminal includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. Fig. 12 is a schematic diagram of the image processing circuit according to one embodiment of the present application. As shown in Fig. 12, for ease of illustration, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in Fig. 12, the image processing circuit of the mobile terminal 1200 includes an imaging device 10, an ISP processor 30 and a control logic device 40. The imaging device 10 may include the image acquisition module 120.
Specifically, the image acquisition module 120 may include the structured light projector 121 and the structured light camera 122. The structured light projector 121 projects structured light onto the first user and the scene where the first user is located. The structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The structured light camera 122 may include an image sensor 1221 and one or more lenses 1222. The image sensor 1221 is used to capture the structured light image projected by the structured light projector 121 onto the first user. The image acquisition module 120 may send the structured light image to the ISP processor 30 for demodulation, phase recovery, phase information calculation and other processing to obtain the depth information of the first user.
The image sensor 1221 is also used to capture the structured light image projected by the structured light projector 121 onto a measured object in the scene where the first user is located, and to send the structured light image to the ISP processor 30, which demodulates the structured light image to obtain the depth information of the measured object. Meanwhile, the image sensor 1221 can also capture the color information of the measured object. Of course, the structured light image and the color information of the measured object may also be captured by two separate image sensors 1221.
Taking speckle structured light as an example, the ISP processor 30 demodulates the structured light image as follows: the speckle image of the measured object is extracted from the structured light image, and image data calculation is performed on the speckle image of the measured object and a reference speckle image according to a predefined algorithm to obtain the displacement of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle point in the reference speckle image. The depth value of each speckle point of the speckle image is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
Of course, the depth image information may also be obtained by binocular vision or by a time-of-flight (TOF) method; this is not limited here: as long as a method can obtain or calculate the depth information of the measured object, it falls within the scope of this embodiment.
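For the speckle case above, one commonly used reference-plane triangulation relation converts a speckle displacement into a depth value. The exact relation, sign convention and calibration constants vary by device and are assumptions here, not taken from the disclosure:

```python
def depth_from_displacement(disp_px, focal_px, baseline_m, ref_depth_m):
    """One common reference-plane triangulation relation for speckle
    structured light: 1/Z = 1/Z_ref + d / (f * b), where d is the speckle
    displacement (pixels), f the focal length (pixels), b the
    projector-camera baseline (metres) and Z_ref the calibrated depth of
    the reference plane. All constants are assumed example calibration."""
    return 1.0 / (1.0 / ref_depth_m + disp_px / (focal_px * baseline_m))
```

With zero displacement the point lies on the reference plane; under this sign convention a positive displacement corresponds to a point nearer than the reference plane.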
After the ISP processor 30 receives the color information of the measured object captured by the image sensor 1221, it can process the image data corresponding to that color information. The ISP processor 30 analyzes the image data to obtain image statistics that can be used to determine one or more control parameters of the imaging device 10. The image sensor 1221 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 30.
The ISP processor 30 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 30 can perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be performed at the same or different bit-depth precisions.
The ISP processor 30 can also receive pixel data from the image memory 20. The image memory 20 can be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, the ISP processor 30 can perform one or more image processing operations.
After the ISP processor 30 obtains the color information and the depth information of the measured object, they can be fused to obtain a three-dimensional image. The features of the measured object can be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example by the active shape model (ASM), the active appearance model (AAM), principal component analysis (PCA), the discrete cosine transform (DCT) or similar methods; this is not limited here. Registration and feature fusion are then performed on the features of the measured object extracted from the depth information and the features extracted from the color information. The fusion here may directly combine the features extracted from the depth information and the color information, or may combine the same features in different images after weighting; other fusion modes are also possible. Finally, the three-dimensional image is generated according to the fused features.
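The weighted feature-fusion variant mentioned above can be illustrated by a short sketch; representing the registered features as equal-length vectors and using a single scalar weight `w_depth` are simplifying assumptions for illustration:

```python
def fuse_features(depth_feats, color_feats, w_depth=0.5):
    """Weighted fusion of registered features extracted from the depth
    information and from the color information. A real pipeline would
    first register the two feature sets; here they are assumed aligned."""
    return [w_depth * d + (1.0 - w_depth) * c
            for d, c in zip(depth_feats, color_feats)]
```

Setting `w_depth=1.0` or `0.0` degenerates to using only the depth-derived or only the color-derived features, which corresponds to the "directly combine" mode mentioned in the text.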
The image data of the three-dimensional image may be sent to the image memory 20 for additional processing before being displayed. The ISP processor 30 receives processed data from the image memory 20 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to the display 60 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 30 may also be sent to the image memory 20, and the display 60 may read image data from the image memory 20. In one embodiment, the image memory 20 may be configured to implement one or more frame buffers. The output of the ISP processor 30 may also be sent to an encoder/decoder 50 to encode/decode the image data; the encoded image data can be saved and decompressed before being displayed on the display 60. The encoder/decoder 50 may be implemented by a CPU, a GPU or a coprocessor.
The image statistics determined by the ISP processor 30 may be sent to the control logic device 40. The control logic device 40 may include a processor and/or microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine the control parameters of the imaging device 10 according to the received image statistics.
The steps of implementing the video background processing method with the image processing technique of Fig. 12 are as follows:
S1': obtain the current video picture of the second user on a video call with the first user;
S2': when it is determined that a third user other than the second user is present in the current video picture, obtain the three-dimensional background image of the scene where the first user is located, and obtain the depth image of the first user;
S3': process the three-dimensional background image and the depth image to extract the first user's person region in the three-dimensional background image, and determine the background region in the three-dimensional background image according to the person region;
S4': identify the privacy region in the background region, and blur the privacy region to generate a new video background for the first user.
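Steps S1'-S4' can be sketched as a single orchestration function. Every callable here is a hypothetical stand-in injected by the caller; the disclosure does not define these interfaces:

```python
def process_video_background(detect_third_user, capture_scene, capture_depth,
                             extract_person, identify_privacy, blur):
    """Orchestration sketch of steps S1'-S4'. Returns None when no third
    user is present, so the original background is kept unchanged."""
    if not detect_third_user():                      # S1'/S2': third-user check
        return None
    scene = capture_scene()                          # S2': 3-D background image
    depth = capture_depth()                          # S2': depth image
    person = extract_person(scene, depth)            # S3': person region
    background = [item for item in scene if item not in person]  # S3'
    privacy = identify_privacy(background)           # S4': privacy region
    return blur(scene, privacy)                      # S4': new video background
```

A caller would wire in the structured-light capture, person extraction and blurring routines; the toy list-based "scene" below only demonstrates the control flow.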
It should be noted that the foregoing explanation of the video background processing method embodiments also applies to the mobile terminal of this embodiment; the implementation principles are similar and are not repeated here.
The present application further provides a computer program product which, when the instructions in the computer program product are executed by a processor, performs the foregoing video background processing method.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined with "first" or "second" may expressly or implicitly include at least one such feature. In the description of the present application, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, fragment or portion of code including one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transport the program for use by, or in connection with, the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion (electronic device) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present application may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the above method embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present application.

Claims (12)

1. A video background processing method, characterized in that the method comprises the following steps:
obtaining a current video picture of a second user on a video call with a first user;
when it is determined that a third user other than the second user is present in the current video picture, obtaining a three-dimensional background image of a scene where the first user is located, and obtaining a depth image of the first user;
processing the three-dimensional background image and the depth image to extract a person region of the first user in the three-dimensional background image, and determining a background region in the three-dimensional background image according to the person region;
identifying a privacy region in the background region, and blurring the privacy region to generate a new video background for the first user.
2. The method according to claim 1, characterized in that the obtaining of the depth image of the first user comprises:
projecting structured light onto the first user;
capturing a structured light image modulated by the first user; and
demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
3. The method according to claim 1, characterized in that the processing of the three-dimensional background image and the depth image of the first user to extract the person region of the first user in the three-dimensional background image comprises:
identifying a face region in the three-dimensional background image;
obtaining depth information corresponding to the face region from the depth image of the first user;
determining a depth range of the person region according to the depth information of the face region;
determining, according to the depth range of the person region, the person region that is connected to the face region and falls within the depth range.
4. The method according to claim 1, characterized in that the method further comprises:
obtaining a three-dimensional face model of the second user, wherein the three-dimensional face model is established by a terminal of the second user by projecting structured light onto the second user;
performing face recognition on the current video picture to obtain face regions in the current video picture;
judging, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than the face region of the second user;
if it is judged that the face regions in the current video picture include a face region other than the face region of the second user, determining that a third user other than the second user is present in the current video picture, and performing the step of obtaining the three-dimensional background image of the scene where the first user is located.
5. The method according to any one of claims 1-4, characterized in that after the generation of the new video background of the first user, the method further comprises:
when it is detected again that only the second user is present in the current video picture, switching the video background of the first user back to an original video background; or
when it is detected again that only the second user is present in the current video picture, displaying prompt information asking whether to restore the video background of the first user to the original video background, and, upon receiving a confirmation instruction from the first user to restore the video background, switching the video background of the first user back to the original video background.
6. A video background processing device, comprising:
a first obtaining module, configured to obtain a current video picture of a second user in a video call with a first user;
an image capture module, configured to, when it is determined that a third user other than the second user is present in the current video picture, obtain a three-dimensional background image of the scene where the first user is located, and obtain a depth image of the first user;
a first processing module, configured to process the three-dimensional background image and the depth image to extract a person region of the first user in the three-dimensional background image, and to determine a background region in the three-dimensional background image according to the person region; and
a second processing module, configured to identify a privacy area in the background region and to blur the privacy area, so as to generate a new video background of the first user.
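The blurring performed by the second processing module can be sketched as applying a filter only to pixels flagged as the privacy area. A minimal Python sketch using a box blur over grayscale values stored as nested lists; a real device would more likely apply a stronger Gaussian or mosaic filter through an image library:

```python
def blur_privacy_region(image, mask, radius=1):
    """Box-blur only the pixels flagged in `mask`.

    image: 2D list of grayscale values; mask: 2D list of 0/1 flags
    marking the privacy area inside the background region. The box
    filter and radius are illustrative stand-ins for the claimed
    blurring ("virtualization") of the privacy area.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; unmasked pixels unchanged
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # Average the neighbourhood around (y, x), clipped to the image.
                vals = [image[j][i]
                        for j in range(max(0, y - radius), min(h, y + radius + 1))
                        for i in range(max(0, x - radius), min(w, x + radius + 1))]
                out[y][x] = sum(vals) / len(vals)
    return out
```

Only the masked background pixels are altered, so the person region and the non-private background stay sharp.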
7. The device according to claim 6, wherein the image capture module comprises a structured light projector and a structured light camera, wherein:
the structured light projector is configured to project structured light onto the first user; and
the structured light camera is configured to capture a structured light image modulated by the first user, and to demodulate phase information corresponding to each pixel of the structured light image to obtain the depth image of the first user.
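Claim 7 states only that phase information is demodulated per pixel of the modulated structured-light image. One common concrete scheme is four-step phase shifting, sketched below; the four phase-shifted captures and the linear phase-to-depth calibration are illustrative assumptions, not details taken from the claim:

```python
import math

def demodulate_phase(i1, i2, i3, i4):
    """Recover the wrapped phase at one pixel from four intensity
    captures taken with phase shifts of 0, pi/2, pi and 3*pi/2
    (classic four-step phase shifting).
    """
    return math.atan2(i4 - i2, i1 - i3)

def phase_to_depth(phase, scale=1.0, offset=0.0):
    """Map phase to depth with a hypothetical linear calibration;
    real devices derive per-pixel depth via triangulation calibration.
    """
    return scale * phase + offset
```

Running `demodulate_phase` over every pixel of the four captures yields the phase map from which the depth image of the first user is built.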
8. The device according to claim 6, wherein the first processing module is specifically configured to:
identify a face region in the three-dimensional background image;
obtain depth information corresponding to the face region from the depth image of the first user;
determine a depth range of the person region according to the depth information of the face region; and
determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range.
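The four operations above amount to a seeded region grow: read the depth at the face region, widen it into a depth range, then collect the pixels connected to the face that stay inside that range. A minimal flood-fill sketch, assuming 4-connectivity and an illustrative depth tolerance (the claim fixes neither):

```python
from collections import deque

def extract_person_region(depth, face_seed, tolerance=0.3):
    """Grow the person region out of the face region.

    depth: 2D list of per-pixel depths from the depth image;
    face_seed: (row, col) of a pixel inside the detected face region;
    tolerance: half-width of the depth range, an illustrative assumption.
    Returns the set of (row, col) pixels that are connected to the seed
    and fall within the derived depth range.
    """
    h, w = len(depth), len(depth[0])
    sy, sx = face_seed
    lo, hi = depth[sy][sx] - tolerance, depth[sy][sx] + tolerance
    region, queue = {(sy, sx)}, deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        # Visit the four neighbours that are in bounds, unvisited,
        # and whose depth lies inside the person's depth range.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and lo <= depth[ny][nx] <= hi):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

Everything outside the returned region is then treated as the background region in which the privacy area is searched for.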
9. The device according to claim 6, further comprising:
a second obtaining module, configured to obtain a three-dimensional face model of the second user, wherein the three-dimensional face model is established by a terminal of the second user by projecting structured light onto the second user;
a recognition module, configured to perform face recognition on the current video picture to obtain face regions in the current video picture; and
a judging module, configured to judge, according to the three-dimensional face model of the second user, whether the face regions in the current video picture include a face region other than the face region of the second user;
wherein the image capture module is further configured to, when it is judged that the face regions in the current video picture include a face region other than the face region of the second user, determine that a third user other than the second user is present in the current video picture, and perform the step of obtaining the three-dimensional background image of the scene where the first user is located.
10. The device according to any one of claims 6-9, further comprising:
a third processing module, configured to, when it is detected again that only the second user is present in the current video picture, switch the video background of the first user back to the original video background; or, when it is detected again that only the second user is present in the current video picture, display prompt information asking whether to switch the video background of the first user back to the original video background, and, upon receiving a confirmation instruction of the first user to restore the video background, switch the video background of the first user back to the original video background.
11. One or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the video background processing method according to any one of claims 1 to 5.
12. A mobile terminal, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the video background processing method according to any one of claims 1 to 5.
CN201710813311.3A 2017-09-11 2017-09-11 Video background processing method, device and mobile terminal Expired - Fee Related CN107623817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710813311.3A CN107623817B (en) 2017-09-11 2017-09-11 Video background processing method, device and mobile terminal


Publications (2)

Publication Number Publication Date
CN107623817A true CN107623817A (en) 2018-01-23
CN107623817B CN107623817B (en) 2019-08-20

Family

ID=61089572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710813311.3A Expired - Fee Related CN107623817B (en) 2017-09-11 2017-09-11 Video background processing method, device and mobile terminal

Country Status (1)

Country Link
CN (1) CN107623817B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109379571A * 2018-12-13 2019-02-22 移康智能科技(上海)股份有限公司 Intelligent peephole implementation method and intelligent peephole
CN110060205A (en) * 2019-05-08 2019-07-26 北京迈格威科技有限公司 Image processing method and device, storage medium and electronic equipment
CN110175950A * 2018-10-24 2019-08-27 广东小天才科技有限公司 Privacy protection method based on a wearable device, and wearable device
CN110443744A * 2018-05-03 2019-11-12 安讯士有限公司 Method, device and system for applying a degree of blurring to image data
CN110502974A * 2019-07-05 2019-11-26 深圳壹账通智能科技有限公司 Video image display method, device, equipment and readable storage medium
CN111246093A (en) * 2020-01-16 2020-06-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111626924A (en) * 2020-05-28 2020-09-04 维沃移动通信有限公司 Image blurring processing method and device, electronic equipment and readable storage medium
CN112204945A (en) * 2019-08-14 2021-01-08 深圳市大疆创新科技有限公司 Image processing method, image processing apparatus, image capturing device, movable platform, and storage medium
CN115760986A (en) * 2022-11-30 2023-03-07 北京中环高科环境治理有限公司 Image processing method and device based on neural network model
US11823426B2 (en) 2020-05-15 2023-11-21 Koninklijke Philips N.V. Ambient light suppression

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510957A (en) * 2008-02-15 2009-08-19 索尼株式会社 Image processing device, camera device, communication system, image processing method, and program
CN102663810A * 2012-03-09 2012-09-12 北京航空航天大学 Fully automatic three-dimensional face modeling approach based on phase-deviation scanning
CN104951773A * 2015-07-12 2015-09-30 上海微桥电子科技有限公司 Real-time face recognition and monitoring system
CN106358069A * 2016-10-31 2017-01-25 维沃移动通信有限公司 Video data processing method and mobile terminal
CN106878588A * 2017-02-27 2017-06-20 努比亚技术有限公司 Video background blurring terminal and method


Also Published As

Publication number Publication date
CN107623817B (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN107623817B (en) Video background processing method, device and mobile terminal
CN107623832A (en) Video background replacement method, device and mobile terminal
CN107592490A (en) Video background replacement method, device and mobile terminal
CN107734267B (en) Image processing method and device
CN107682607A Image acquisition method and device, mobile terminal and storage medium
CN107529096A (en) Image processing method and device
CN107493428A Shooting control method and device
CN107610077A Image processing method and device, electronic device, and computer-readable storage medium
WO2019047985A1 (en) Image processing method and device, electronic device, and computer-readable storage medium
CN107734264B (en) Image processing method and device
CN107509045A Image processing method and device, electronic device, and computer-readable storage medium
CN107707835A Image processing method and device, electronic device, and computer-readable storage medium
CN107707831A Image processing method and device, electronic device, and computer-readable storage medium
CN107895110A Unlocking method and device for a terminal device, and mobile terminal
CN107493427A Focusing method and device for a mobile terminal, and mobile terminal
CN107707838A (en) Image processing method and device
CN107509043A (en) Image processing method and device
CN107610078A (en) Image processing method and device
CN107705278A Dynamic effect adding method and terminal device
CN108052813A Unlocking method and device for a terminal device, and mobile terminal
CN107613239B (en) Video communication background display method and device
CN107592491B (en) Video communication background display method and device
CN107682656B (en) Background image processing method, electronic device, and computer-readable storage medium
CN107622496A (en) Image processing method and device
CN107613228A Virtual clothing adding method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190820