CN108833881A - Method and device for constructing image depth information - Google Patents

Method and device for constructing image depth information

Info

Publication number
CN108833881A
CN108833881A
Authority
CN
China
Prior art keywords
depth information
region
key point
addition
target object
Prior art date
Legal status
Granted
Application number
CN201810619716.8A
Other languages
Chinese (zh)
Other versions
CN108833881B (en)
Inventor
赖锦锋
庄幽文
Current Assignee
Honey Grapefruit Network Technology Shanghai Co ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority to CN201810619716.8A
Publication of CN108833881A
Priority to PCT/CN2019/073070
Application granted
Publication of CN108833881B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/128 Adjusting depth or disparity
    • H04N 13/20 Image signal generators
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose a method and device for constructing image depth information, relating to the field of image processing. The method for constructing image depth information includes: obtaining depth information related to a target object; marking the region of the target object to which depth information is to be added; and adding the obtained depth information to the marked region. By adding preset depth information to an image, an image rich in three-dimensional quality can be obtained from an existing image capture device that lacks depth-sensing capability.

Description

Method and device for constructing image depth information
Technical field
The present disclosure relates to the field of image processing, and in particular to a method and device for constructing image depth information.
Background technique
With the development of networks and hardware, recording daily life through intelligent terminals has become widespread. Mobile terminals capable of taking pictures, such as mobile phones and tablet computers, are increasingly common, and with the continuous improvement of camera pixel counts, mobile phones have largely replaced traditional cameras as everyday photographic equipment. Image processing software and plug-ins installed on such devices, which beautify captured images or videos or add stickers to them, are also widely used.
Summary of the invention
However, for cost reasons, most cameras configured in existing terminals do not have the ability to capture depth information; front cameras in particular usually lack this capability. As a result, the images produced by such terminals lack depth information and therefore lack a sense of three-dimensionality.
In view of this, embodiments of the present disclosure provide a method and device for constructing image depth information that at least partially solve the problems existing in the prior art.
In a first aspect, an embodiment of the present disclosure provides a method for constructing image depth information, including:
obtaining depth information related to a target object;
marking the region of the target object to which depth information is to be added;
adding the obtained depth information to the marked region.
As a specific implementation of the embodiment of the present disclosure, marking the region of the target object to which depth information is to be added includes:
separating the foreground and background of the target object;
extracting the foreground of the target object;
marking, in the foreground, the region to which depth information is to be added.
As a specific implementation of the embodiment of the present disclosure, marking, in the foreground information, the region to which depth information is to be added includes:
extracting key points in the foreground;
dividing the foreground into regions based on the key points;
marking the key points of the region to which depth information is to be added.
As a specific implementation of the embodiment of the present disclosure, marking, in the foreground, the region to which depth information is to be added includes:
extracting the contour information of the foreground;
marking, inside the contour information, the region to which depth information is to be added.
As a specific implementation of the embodiment of the present disclosure, after extracting the contour information of the foreground, the method further includes:
smoothing the contour information.
As a specific implementation of the embodiment of the present disclosure, obtaining depth information related to the target object is specifically:
obtaining depth information related to the target object from a depth information template.
As a specific implementation of the embodiment of the present disclosure, adding the obtained depth information to the region to which depth information is to be added includes:
extracting key points of the marked region, as first-class key points;
extracting key points of the depth information template, as second-class key points;
based on the first-class key points and the second-class key points, adding the depth information in the depth information template to the marked region.
As a specific implementation of the embodiment of the present disclosure, adding the depth information in the depth information template to the marked region based on the first-class and second-class key points includes:
performing triangulation on the marked region based on the first-class key points, obtaining at least one first region;
performing triangulation on the depth information template based on the second-class key points, obtaining at least one second region;
fitting the at least one second region onto the corresponding at least one first region.
As a specific implementation of the embodiment of the present disclosure, adding the depth information in the depth information template to the marked region based on the first-class and second-class key points includes:
calculating the distance between first-class key points to obtain a first distance;
calculating the distance between second-class key points to obtain a second distance;
adjusting the depth information template according to the ratio of the first distance to the second distance;
adding the adjusted depth information template to the marked region.
In a second aspect, an embodiment of the present disclosure further provides a device for constructing image depth information, including:
an obtaining module, for obtaining depth information related to a target object;
a region marking module, for marking the region of the target object to which depth information is to be added;
an adding module, for adding the obtained depth information to the marked region.
As a specific implementation of the embodiment of the present disclosure, the region marking module includes:
a separation module, for separating the foreground and background of the target object;
an extraction module, for extracting the foreground of the target object;
a foreground marking module, for marking, in the foreground, the region to which depth information is to be added.
As a specific implementation of the embodiment of the present disclosure, the foreground marking module includes:
a key point extraction module, for extracting key points in the foreground;
a region division module, for dividing the foreground into regions based on the key points;
a key point marking module, for marking the key points of the region to which depth information is to be added.
As a specific implementation of the embodiment of the present disclosure, the foreground marking module includes:
a contour extraction module, for extracting the contour information of the foreground;
a contour marking module, for marking, inside the contour information, the region to which depth information is to be added.
As a specific implementation of the embodiment of the present disclosure, the device further includes:
a smoothing module, for smoothing the contour information extracted by the contour extraction module.
As a specific implementation of the embodiment of the present disclosure, the obtaining module is used to obtain depth information related to the target object, specifically:
obtaining depth information related to the target object from a depth information template.
As a specific implementation of the embodiment of the present disclosure, the adding module includes:
a first-class key point extraction module, for extracting key points of the marked region, as first-class key points;
a second-class key point extraction module, for extracting key points of the depth information template, as second-class key points;
an information adding module, for adding the depth information in the depth information template to the marked region based on the first-class and second-class key points.
As a specific implementation of the embodiment of the present disclosure, the information adding module includes:
a first subdivision module, for performing triangulation on the marked region based on the first-class key points, obtaining at least one first region;
a second subdivision module, for performing triangulation on the depth information template based on the second-class key points, obtaining at least one second region;
a fitting module, for fitting the at least one second region onto the corresponding at least one first region.
As a specific implementation of the embodiment of the present disclosure, the information adding module includes:
a first distance calculation module, for calculating the distance between first-class key points to obtain a first distance;
a second distance calculation module, for calculating the distance between second-class key points to obtain a second distance;
a template adjustment module, for adjusting the depth information template according to the ratio of the first distance to the second distance;
a template information adding module, for adding the adjusted depth information template to the marked region.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the method for constructing image depth information of any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for constructing image depth information of any implementation of the first aspect.
The method, device, electronic device and non-transitory computer-readable storage medium for constructing image depth information provided by the embodiments of the present disclosure include: obtaining depth information related to a target object; marking the region of the target object to which depth information is to be added; and adding the obtained depth information to the marked region. By adding preset depth information to an image, the embodiments of the present disclosure enable images rich in three-dimensional quality to be obtained from existing image capture devices that lack depth-sensing capability.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented in accordance with the contents of this specification, and that the above and other objects, features and advantages of the present disclosure may be more readily apparent, preferred embodiments are set out below in detail with reference to the accompanying drawings.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the method for constructing image depth information provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of marking the region of the target object to which depth information is to be added, provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram, provided by an embodiment of the present disclosure, of converting RGB color space values into a two-dimensional vector containing only chromaticity and luminance;
Fig. 4 is a flowchart of marking, in the foreground information, the region to which depth information is to be added, provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of the corner point extraction method provided by an embodiment of the present disclosure;
Fig. 6 is a flowchart of adding the obtained depth information to the marked region, provided by an embodiment of the present disclosure;
Fig. 7 is a flowchart of adding the depth information in the depth information template to the marked region based on the first-class and second-class key points, provided by an embodiment of the present disclosure;
Fig. 8 is another flowchart of adding the depth information in the depth information template to the marked region based on the first-class and second-class key points, provided by an embodiment of the present disclosure;
Fig. 9 is a functional block diagram of the device for constructing image depth information provided by an embodiment of the present disclosure;
Fig. 10 is a functional block diagram of the region marking module provided by an embodiment of the present disclosure;
Fig. 11 is a functional block diagram of the foreground marking module provided by an embodiment of the present disclosure;
Fig. 12 is another functional block diagram of the foreground marking module provided by an embodiment of the present disclosure;
Fig. 13 is a functional block diagram of the adding module provided by an embodiment of the present disclosure;
Fig. 14 is a functional block diagram of the information adding module provided by an embodiment of the present disclosure;
Fig. 15 is another functional block diagram of the information adding module provided by an embodiment of the present disclosure;
Fig. 16 is a functional block diagram of the electronic device provided by an embodiment of the present disclosure;
Fig. 17 is a schematic diagram of the computer-readable storage medium provided by an embodiment of the present disclosure;
Fig. 18 is a functional block diagram of the terminal provided by an embodiment of the present disclosure.
Specific embodiment
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
It should be understood that the embodiments of the present disclosure are illustrated below by specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The present disclosure can also be implemented or applied through other, different specific embodiments, and the details in this specification can be modified or changed from different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that one aspect described herein can be implemented independently of any other aspect, and two or more of these aspects can be combined in various ways. For example, a device can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such a device can be implemented and/or such a method can be practiced using other structures and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the diagrams provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic way; the drawings show only the components related to the present disclosure rather than being drawn according to the number, shape and size of components in actual implementation. In actual implementation the form, quantity and proportion of each component can change arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
Depth information contains depth values. A depth value refers to the distance from an imaging point in the photographed scene to the XY plane passing through the optical center of the camera, where the XY plane is the plane parallel to the camera lens.
An embodiment of the present disclosure provides a method for constructing image depth information. As shown in Fig. 1, the method for constructing image depth information includes:
S101: obtaining depth information related to a target object;
An ordinary image, after image processing, typically contains no depth information for the target object (for example, a face) in the image, and a target image without depth information looks flat and lacks a sense of three-dimensionality.
To make the target object look more layered and three-dimensional, depth information can be added to it. At the same time, the newly added depth information should match the target object.
To add depth information, depth information related to the target object needs to be obtained in advance. There are many methods of obtaining image depth information, such as capturing 3D video based on binocular cameras or camera arrays, obtaining 3D video through moving shots with a single camera, or acquisition with a camera that has a depth-sensing function.
Obtaining depth information related to the target object may mean downloading it directly from the network or obtaining the depth information of a corresponding image locally; the choice can be determined by the user's selection or by the scene the user specifically chooses. For example, the user may choose the depth information of a face.
S102: marking the region of the target object to which depth information is to be added;
For example, the user chooses to add depth information to a face. In practical applications, however, the image captured by the camera contains not only face image information but also background information, as well as image information of the head, neck or body associated with the face. When depth information is added, it is mainly added to the face, while other parts (such as the head or neck) occupy a small proportion and need no depth information; likewise, if the background is relatively simple, it needs no depth information either. If depth information only needs to be added to the face, the face region must be marked. Moreover, not all regions of the face necessarily need depth information, so depth information may, according to the user's choice, be added only to parts such as the eyes, cheeks or nose.
S103: adding the obtained depth information to the marked region.
After the region that needs depth information has been marked, the depth information obtained in step S101 is added to the marked region. Addition here mainly means writing the corresponding depth information into the corresponding region by means of digital signal processing, i.e. writing the face depth information obtained above into the pixels of the corresponding face image.
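As a rough sketch of this writing step, assuming the depth template is a NumPy array already aligned to the image and the marked region is a boolean mask (all names are illustrative, not the patent's):

```python
import numpy as np

def add_depth_to_region(depth_template: np.ndarray,
                        region_mask: np.ndarray,
                        image_shape: tuple) -> np.ndarray:
    """Write template depth values into the marked region of a depth map.

    depth_template: H x W array of depth values, aligned to the image.
    region_mask:    H x W boolean array marking the region to receive depth.
    Returns an H x W depth map that is zero outside the marked region.
    """
    depth_map = np.zeros(image_shape[:2], dtype=np.float32)
    depth_map[region_mask] = depth_template[region_mask]
    return depth_map

# Usage: depth = add_depth_to_region(template, face_mask, photo.shape)
```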
According to another embodiment of the present disclosure, as shown in Fig. 2, marking the region of the target object to which depth information is to be added includes:
S201: separating the foreground and background of the target object;
The foreground and background are separated; for example, in a photo containing a face against a blue background, the face and the blue background are separated from each other.
S202: extracting the foreground of the target object;
The foreground separated above is extracted, e.g. the foreground after the separation of step S201, such as the face.
S203: marking, in the foreground, the region to which depth information is to be added.
If depth information only needs to be added to the cheek portion of the face, the cheek portion is marked; if depth information needs to be added to the nose or eye sockets of the face, the nose or eye sockets are marked accordingly.
Foreground-background separation means mathematically separating the foreground from the background and then extracting the separated foreground image. A background modeling method based on color information can be used to separate and extract the foreground.
The background modeling method based on color information is as follows:
The RGB color value is converted into a two-dimensional representation containing only chromaticity and luminance. As shown in Fig. 3, $E_i = [E_R(i), E_G(i), E_B(i)]$ represents the expected color of background pixel $i$ after background modeling, and the line segment $OE_i$ through the origin $O$ is called the expected chromaticity line. $I_i$ denotes the RGB color value of the current image from which the background is to be segmented. To better distinguish $I_i$ from $E_i$, the following decomposition is made:
The foot of the perpendicular from $I_i$ to $OE_i$ meets the line at the point $\alpha_i E_i$. Here $\alpha_i$ represents the luminance of the current pixel relative to the expected background luminance: $\alpha_i = 1$ means the luminance of the current pixel equals the background expectation; $\alpha_i < 1$ means the current luminance is lower than the background expectation; $\alpha_i > 1$ means it is higher. For convenience of statement, $\alpha_i$ is referred to below as the luminance distortion.
The shortest distance from $I_i$ to $OE_i$, i.e. the length of the segment $CD_i$ from $I_i$ to $\alpha_i E_i$, is the deviation of the current pixel from the expected background color, called the color distortion.
Define $E_i = [u_R(i), u_G(i), u_B(i)]$ as the color expectation of pixel $i$ over $N$ frames of background images, and $s_i = [\sigma_R(i), \sigma_G(i), \sigma_B(i)]$ as the color standard deviation of pixel $i$ over the $N$ frames. The luminance distortion $\alpha_i$ and color distortion $CD_i$ can then be computed as follows:

$$\alpha_i = \frac{\dfrac{I_R\,u_R(i)}{\sigma_R^2(i)} + \dfrac{I_G\,u_G(i)}{\sigma_G^2(i)} + \dfrac{I_B\,u_B(i)}{\sigma_B^2(i)}}{\left(\dfrac{u_R(i)}{\sigma_R(i)}\right)^2 + \left(\dfrac{u_G(i)}{\sigma_G(i)}\right)^2 + \left(\dfrac{u_B(i)}{\sigma_B(i)}\right)^2} \qquad (1)$$

$$CD_i = \sqrt{\left(\frac{I_R - \alpha_i u_R(i)}{\sigma_R(i)}\right)^2 + \left(\frac{I_G - \alpha_i u_G(i)}{\sigma_G(i)}\right)^2 + \left(\frac{I_B - \alpha_i u_B(i)}{\sigma_B(i)}\right)^2} \qquad (2)$$

where $I_R$, $I_G$, $I_B$ are the components of $I_i$ in the three RGB channels.
The color value in RGB space is thus decomposed into luminance and chromaticity. By applying appropriate thresholds to the luminance distortion and color distortion of a pixel $i$ to be judged, it can be determined whether the current pixel is foreground or background. Because the RGB color space has been converted into a two-dimensional space of luminance distortion and chromaticity distortion, the luminance distortion can also be used to judge whether the current pixel is a shadow. The specific classification of pixel $i$ is as follows:
If the current pixel has luminance and chromaticity similar to the background pixel expectation (the luminance distortion and chromaticity distortion are within a certain range), the pixel is ordinary background, labeled B′.
If the current pixel has chromaticity similar to the background pixel expectation but its luminance is lower than that of the background pixel, the pixel is a shadow of the foreground, labeled S.
If the current pixel has chromaticity similar to the background pixel expectation but its luminance is higher than that of the background pixel, the pixel is background with high luminance (possibly caused by a change in illumination), labeled H.
If the current pixel has chromaticity different from the background pixel expectation, the pixel is foreground to be extracted, labeled F.
Different pixels in a video image are independent of each other, so the $\alpha_i$ and $CD_i$ of different pixels obey different distributions. To apply unified thresholds to different pixels, the $\alpha_i$ and $CD_i$ of each pixel need to be normalized:

$$\hat{\alpha}_i = \frac{\alpha_i - 1}{a_i} \qquad (3)$$

$$\widehat{CD}_i = \frac{CD_i}{b_i} \qquad (4)$$

In formula 3, $\hat{\alpha}_i$ is the normalized luminance distortion; in formula 4, $\widehat{CD}_i$ is the normalized chromaticity distortion; $a_i$ and $b_i$ are the standard deviations used for normalization.
A pixel can therefore be classified as ordinary background (B′), shadow (S), bright background (H) or foreground (F). The classification formula after normalization is:

$$M(i) = \begin{cases} F, & \widehat{CD}_i > \tau_{CD} \\ B', & \widehat{CD}_i \le \tau_{CD} \ \text{and} \ \tau_{\alpha 2} \le \hat{\alpha}_i \le \tau_{\alpha 1} \\ S, & \widehat{CD}_i \le \tau_{CD} \ \text{and} \ \hat{\alpha}_i < \tau_{\alpha 2} \\ H, & \widehat{CD}_i \le \tau_{CD} \ \text{and} \ \hat{\alpha}_i > \tau_{\alpha 1} \end{cases} \qquad (5)$$

In formula 5, $\tau_{CD}$, $\tau_{\alpha 1}$ and $\tau_{\alpha 2}$ are the thresholds chosen for deciding foreground and background.
A model is established for the $N$ frames of background images according to formulas 1, 2, 3, 4 and 5; the specific steps are as follows:
(a) compute the expectation and standard deviation of the N frames of background images;
(b) continuing from (a), compute the color distortion and luminance distortion of the N frames of images;
(c) normalize the color distortion and luminance distortion;
(d) gather statistics of the normalized color distortion and normalized luminance distortion of the N frames and set the thresholds for deciding foreground and background; background modeling is then complete;
(e) for an image requiring foreground extraction, likewise compute the normalized color distortion and normalized luminance distortion, then judge foreground or background according to the thresholds obtained in (d).
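As a rough illustration of steps (a) through (e), the following Python sketch implements the luminance/color-distortion classification described above; the NumPy formulation, the epsilon guards and the threshold values are assumptions for illustration, not part of the patent:

```python
import numpy as np

def build_background_model(frames: np.ndarray):
    """frames: (N, H, W, 3) stack of background images. Step (a)."""
    mean = frames.mean(axis=0)                 # u_R, u_G, u_B per pixel
    std = frames.std(axis=0) + 1e-6            # sigma per channel; avoid /0
    return mean, std

def distortions(image: np.ndarray, mean: np.ndarray, std: np.ndarray):
    """Luminance distortion alpha (formula 1) and color distortion CD (formula 2)."""
    num = (image * mean / std**2).sum(axis=-1)
    den = ((mean / std)**2).sum(axis=-1) + 1e-6
    alpha = num / den
    cd = np.sqrt((((image - alpha[..., None] * mean) / std)**2).sum(axis=-1))
    return alpha, cd

def classify(image, mean, std, a, b, t_cd=3.0, t_a1=1.5, t_a2=-1.5):
    """Label each pixel F/B/S/H per formula 5; a, b are the normalization stds."""
    alpha, cd = distortions(image.astype(np.float32), mean, std)
    alpha_n = (alpha - 1.0) / a                # formula 3
    cd_n = cd / b                              # formula 4
    labels = np.full(alpha.shape, 'H', dtype='<U1')
    labels[alpha_n < t_a2] = 'S'               # darker: shadow
    labels[(alpha_n >= t_a2) & (alpha_n <= t_a1)] = 'B'
    labels[cd_n > t_cd] = 'F'                  # chroma differs: foreground
    return labels
```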
According to another embodiment of the present disclosure, as shown in Fig. 4, marking, in the foreground information, the region to which depth information is to be added includes:
S401: extracting key points in the foreground;
Key points in the foreground, such as the key points of a face, are extracted; a key point is a point, obtained using image processing techniques, that can represent an image feature.
S402: dividing the foreground into regions based on the key points;
In region division, each key point may be taken as one region, or multiple key points may be combined and divided into one region. For example, for the eyes, the center of an eye can be taken as a key point, and that key point can constitute one region; alternatively, multiple points at the eye center and on the eye edge can be taken as key points, and these key points on the center and edge of the eye can then be grouped into one region. Proceeding in the same way, the face can be divided into multiple regions.
S403: marking the key points of the region to which depth information is to be added.
After the face is divided into multiple regions, the key points of the regions that need depth information are marked. For example, if the eyes need depth information, the key points of the eye region are labeled 1, and the key points that do not need depth information are labeled 0.
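As a toy illustration of S403 (the 68-point landmark layout and the eye index range are assumptions borrowed from common face-landmark schemes, not specified by the patent):

```python
import numpy as np

# Hypothetical landmark layout: in many 68-point face landmark schemes,
# indices 36-47 cover the two eyes. This is an assumption for illustration.
EYE_IDX = set(range(36, 48))

def mark_keypoints(landmarks: np.ndarray, region_idx: set) -> np.ndarray:
    """Label each foreground key point 1 if it belongs to the region that
    will receive depth information, 0 otherwise (step S403)."""
    labels = np.zeros(len(landmarks), dtype=np.uint8)
    for i in range(len(landmarks)):
        if i in region_idx:
            labels[i] = 1
    return labels

# Usage: labels = mark_keypoints(face_landmarks, EYE_IDX)
```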
According to another embodiment of the present disclosure, marking, in the foreground information, the region to which depth information is to be added includes: extracting the contour information of the foreground information. That is, after the foreground, e.g. the face, is extracted, an image processing algorithm is used to extract the contour information of the face, so that positions such as the nose or eyes are removed. This excludes interference factors and also reduces the amount of computation.
For contour extraction, a contour feature point extraction method is used, specifically a corner point extraction method among the contour feature point extraction methods.
The corner point extraction method is as follows: a disk centered on a contour point, with radius R, is moved along the contour line, as shown in Fig. 5. When the disk is at position A, the areas of the target and the background within the disk are each half the disk area; when the disk is at positions B or C, the area of the target within the disk is less than half and the area of the background is greater than half; when the disk is at position D, the area of the target within the disk is greater than half and the area of the background is less than half. Positions B, C and D in Fig. 5 are corner point positions. From these observations it can be concluded that when the disk center lies on a straight segment, the areas of target and background within the disk are each half the disk area; when the center is at a corner point, one of the target and background areas within the disk is always less than half the disk area. Corner detection can be carried out accordingly.
To distinguish the target region from the background region (it is not necessary to determine which is the real target region and which is the real background region; the two regions only need to be distinguished), contour filling is performed first. For a connected target, several closed contours may be obtained, and the contour lines are filled one by one. In a concrete implementation of the algorithm, the target, the background and the contour line can be indicated with different values. Let $S'_o(r_0)$ be the target area within the disk centered at $r_0$, $S'_b(r_0)$ the background area within the disk centered at $r_0$, and $S'_c(r_0)$ the area occupied by the contour line within the disk; the target and background areas including the contour line are then

$$S_o(r_0) = S'_o(r_0) + \tfrac{1}{2}S'_c(r_0), \qquad S_b(r_0) = S'_b(r_0) + \tfrac{1}{2}S'_c(r_0) \qquad (6)$$

They satisfy $S_o(r_0) + S_b(r_0) = S_d$, where $S_d$ is the area of the disk, i.e. the number of points in the disk. Let $S_{\min}(r_0) = \min\{S_o(r_0), S_b(r_0)\}$; then, when

$$\frac{2\pi \, S_{\min}(r_0)}{S_d} \le T_1 \qquad (7)$$

holds, $r_0$ is marked as a candidate corner point, where $T_1$ in formula 7 is a threshold. Let

$$\theta(r_0) = \frac{2\pi \, S_{\min}(r_0)}{S_d}$$

Then, from formula 7: only when $\theta(r_0) \le T_1$ is $r_0$ marked as a candidate corner point. If a contour point is approximately regarded as the intersection of two straight segments, then $\theta$ is the angle between the two segments.
The disk radius R constitutes the support region for measuring the angle; its choice mainly considers digitization, noise, and the precision of angle measurement. Most current corner extraction methods compute curvature or the angle of two approximating straight segments directly from a group of adjacent contour points, and since contour points participate in the computation, they are easily affected by noise. To reduce the influence of noise, the common approach is to enlarge the support region while performing computation-heavy curve (straight-line) fitting. As a result, such algorithms neither adapt to small scales nor are cheap to compute. The algorithm of this embodiment uses the area enclosed by the contour line within the disk, and computing the area is an integral operation.
Real corner points are then filtered out of the series of candidate corner points obtained by formula 7 using non-minimum suppression; that is, a candidate corner point becomes a real corner point only when the following holds:

$$\theta(r_0) \le \theta(r) \qquad (8)$$

where $r$ refers to any point in the disk other than the center $r_0$.
The real corner points that have been filtered out are then connected to obtain the contour information.
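The area test above lends itself to a direct implementation. Below is a rough Python sketch under stated assumptions (the target is given as a filled binary mask and the contour as a list of integer (y, x) points; helper names are illustrative):

```python
import numpy as np

def disk_offsets(radius: int):
    """Integer offsets of all grid points inside a disk of the given radius."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = ys**2 + xs**2 <= radius**2
    return ys[inside], xs[inside]

def corner_angles(filled: np.ndarray, contour: list, radius: int = 5):
    """For each contour point r0, estimate theta(r0) = 2*pi*S_min/S_d from the
    filled target mask (1 = target, 0 = background)."""
    dy, dx = disk_offsets(radius)
    s_d = len(dy)                               # number of points in the disk
    h, w = filled.shape
    angles = []
    for (y, x) in contour:
        yy, xx = np.clip(y + dy, 0, h - 1), np.clip(x + dx, 0, w - 1)
        s_o = int(filled[yy, xx].sum())         # target area in the disk
        s_min = min(s_o, s_d - s_o)             # min(target, background)
        angles.append(2.0 * np.pi * s_min / s_d)
    return angles

# Candidate corners: points with theta <= T1; real corners keep only the local
# minima of theta within each disk neighborhood (non-minimum suppression).
```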
According to another embodiment of the present disclosure, marking, in the foreground, the region to which depth information is to be added includes: extracting the contour information of the foreground; marking, inside the contour information, the region to which depth information is to be added. For example, after the face information is extracted from the background, the facial features can be deleted so that only the contour information of the face is retained, thereby reducing the amount of computation.
According to another embodiment of the present disclosure, after the contour information of the foreground is extracted, the contour information can be smoothed, i.e. noise is excluded, so that the contour becomes smoother.
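One simple way to perform such smoothing (a sketch only; the patent does not fix a particular method) is a circular moving average over the contour points:

```python
import numpy as np

def smooth_contour(points: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a closed contour (K x 2 array of (y, x) points) by averaging
    each point with its neighbors along the contour, suppressing noise."""
    k = len(points)
    half = window // 2
    out = np.empty_like(points, dtype=np.float32)
    for i in range(k):
        idx = [(i + j) % k for j in range(-half, half + 1)]  # wrap around
        out[i] = points[idx].mean(axis=0)
    return out
```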
According to another embodiment of the present disclosure, in the step of obtaining depth information related to the target object, the depth information is obtained from a depth information template. The depth information in the template is preset; it may be a template generated after a technician captures it with a camera that has a depth-sensing function, or a depth information template obtained by computing depth information from a picture. When used, the depth information template can be downloaded from the network, or a locally saved depth information template can be used. For example, when adding depth information to a face, a face depth information template can be searched for on the network, or among the locally stored depth information templates; the local depth information template library can be updated when connected to the network.
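Purely as an illustration (the directory layout, file format and fallback behavior are assumptions, not part of the patent), a local-first template lookup might look like:

```python
import os
import json

def load_depth_template(category: str, local_dir: str = "./depth_templates"):
    """Look up a depth information template for a category (e.g. 'face'):
    prefer a locally saved template, else signal that a network fetch is needed."""
    path = os.path.join(local_dir, category + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)   # preset depth values plus template key points
    raise FileNotFoundError(
        f"No local template for {category!r}; download one when online.")
```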
According to another embodiment of the present disclosure, as shown in Fig. 6, adding the obtained depth information to the region to which depth information is to be added includes:
S601: extracting key points of the marked region, as first-class key points;
S602: extracting key points of the depth information template, as second-class key points;
S603: based on the first-class key points and the second-class key points, adding the depth information in the depth information template to the marked region.
There may be one or more first-class key points, determined mainly by the degree of fit between the depth information template and the target object. For example, if the marked region is a regular shape, collecting one key point is sufficient; if the shape of the marked region is irregular, multiple key points need to be collected. First-class key points and second-class key points are collected in the same way and are in one-to-one correspondence.
According to another embodiment of the present disclosure, as shown in Fig. 7, adding the depth information in the depth information template to the marked region based on the first-class and second-class key points includes:
S701: performing triangulation on the marked region based on the first-class key points, obtaining at least one first region;
S702: performing triangulation on the depth information template based on the second-class key points, obtaining at least one second region;
S703: fitting the at least one second region onto the corresponding at least one first region.
When the target object is relatively complex or irregular, triangulation is used. For example, the depth information template is subdivided into three parts A, B and C, and the marked region is correspondingly subdivided into three parts A′, B′ and C′; during fitting, A is fitted onto A′, B onto B′ and C onto C′.
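For illustration only, a sketch of S701 to S703 is given below; it assumes SciPy's Delaunay triangulation and OpenCV's affine warp as stand-ins for whatever subdivision and fitting the implementer chooses, and assumes the two key point sets correspond index by index:

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def fit_template_depth(depth_tpl: np.ndarray,
                       tpl_pts: np.ndarray,     # second-class key points (x, y)
                       region_pts: np.ndarray,  # first-class key points (x, y)
                       out_shape: tuple) -> np.ndarray:
    """Triangulate both key point sets with the same connectivity and warp each
    template triangle (second region) onto its counterpart (first region)."""
    depth_tpl = depth_tpl.astype(np.float32)
    out = np.zeros(out_shape[:2], dtype=np.float32)
    tri = Delaunay(region_pts)                   # S701: triangulate marked region
    for simplex in tri.simplices:                # S702: same simplices index tpl_pts
        src = tpl_pts[simplex].astype(np.float32)
        dst = region_pts[simplex].astype(np.float32)
        m = cv2.getAffineTransform(src, dst)     # per-triangle affine map
        warped = cv2.warpAffine(depth_tpl, m, (out.shape[1], out.shape[0]))
        mask = np.zeros(out.shape, dtype=np.uint8)
        cv2.fillConvexPoly(mask, dst.astype(np.int32), 1)
        out[mask == 1] = warped[mask == 1]       # S703: paste this triangle's depth
    return out
```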
Because key points at the edges and at the center are more representative, when the key points of the marked region are extracted, i.e. when the first-class key points are extracted, the key points specifically extracted are those at the edges and at the center of the marked region.
Correspondingly, when the key points of the depth information template are extracted, i.e. when the second-class key points are extracted, the key points specifically extracted are those at the edges and at the center of the template.
When extracting key points, if the shape of the marked region is irregular and only one key point is extracted, the depth information template and the marked region cannot be fitted together well; therefore multiple first-class key points and multiple second-class key points are used, with the number of first-class key points equal to the number of second-class key points.
First-class key points and second-class key points correspond one to one. For example, if the marked region is an eye and the center of the eye is taken as a key point, the center of the eye in the depth information template is also taken as a key point.
According to another embodiment of the present disclosure, as shown in Fig. 8, adding the depth information in the depth information template to the marked region based on the first-class and second-class key points includes:
S801: calculating the distance between first-class key points to obtain a first distance;
S802: calculating the distance between second-class key points to obtain a second distance;
S803: adjusting the depth information template according to the ratio of the first distance to the second distance.
Because the depth information template is preset while the target object varies, the parameters of different target objects differ, so the depth information template needs to be adapted. For example, consider a face depth information template containing face depth information: when depth information is added, the objects receiving it are different people, whose face sizes differ. One person's face may be larger and another's smaller, and neither matches the template directly, so the depth information needs to be adjusted according to the face size, i.e. the template is scaled up or down according to the size of the face.
If the first-class key points are a, b and c, the corresponding second-class key points are a*, b* and c*. Then the lengths of the lines between a and b, between b and c, and between c and a are first distances, and the lengths of the lines between a* and b*, between b* and c*, and between c* and a* are second distances; taking the ratio of a first distance to a second distance yields a scaling factor. Alternatively, only the length of the line between a and b may be taken as the first distance and the length of the line between a* and b* as the second distance, and so on.
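A minimal sketch of S801 to S803, assuming the key points are given as (x, y) NumPy arrays and taking isotropic resizing via OpenCV as an illustrative choice:

```python
import numpy as np
import cv2

def scale_template(depth_tpl: np.ndarray,
                   first_pts: np.ndarray,      # a, b, ... on the target region
                   second_pts: np.ndarray) -> np.ndarray:  # a*, b*, ... on template
    """Resize the depth template by the ratio of key point distances
    (first distance / second distance) so it matches the target's size."""
    d1 = np.linalg.norm(first_pts[0] - first_pts[1])    # S801: |ab|
    d2 = np.linalg.norm(second_pts[0] - second_pts[1])  # S802: |a*b*|
    ratio = d1 / d2                                     # S803: scaling factor
    h, w = depth_tpl.shape[:2]
    return cv2.resize(depth_tpl, (int(round(w * ratio)), int(round(h * ratio))))
```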
Corresponding to the above technical scheme, the present disclosure also discloses a device for constructing image depth information. As shown in Fig. 9, the device includes:
an obtaining module 901, for obtaining depth information related to a target object; the obtaining module obtains depth information related to the target object, specifically by:
obtaining depth information related to the target object from a depth information template;
a region marking module 902, for marking the region of the target object to which depth information is to be added;
an adding module 903, for adding the obtained depth information to the marked region.
Preferably, as shown in Fig. 10, the region marking module includes:
a separation module 1001, for separating the foreground and background of the target object;
an extraction module 1002, for extracting the foreground of the target object;
a foreground marking module 1003, for marking, in the foreground, the region to which depth information is to be added.
Preferably, as shown in Fig. 11, the foreground marking module includes:
a key point extraction module 1101, for extracting key points in the foreground;
a region division module 1102, for dividing the foreground into regions based on the key points;
a key point marking module 1103, for marking the key points of the region to which depth information is to be added.
Preferably, as shown in Fig. 12, the foreground marking module includes:
a contour extraction module 1201, for extracting the contour information of the foreground;
a contour marking module 1202, for marking, inside the contour information, the region to which depth information is to be added.
Preferably, the device further includes:
a smoothing module 1203, for smoothing the contour information extracted by the contour extraction module.
Preferably, as shown in Fig. 13, the adding module includes:
a first-class key point extraction module 1301, for extracting key points of the marked region, as first-class key points;
a second-class key point extraction module 1302, for extracting key points of the depth information template, as second-class key points;
an information adding module 1303, for adding the depth information in the depth information template to the marked region based on the first-class and second-class key points.
Preferably, as shown in Fig. 14, the information adding module includes:
a first subdivision module 1401, for performing triangulation on the marked region based on the first-class key points, obtaining at least one first region;
a second subdivision module 1402, for performing triangulation on the depth information template based on the second-class key points, obtaining at least one second region;
a fitting module 1403, for fitting the at least one second region onto the corresponding at least one first region.
Preferably, as shown in Fig. 15, the information adding module includes:
a first distance calculation module 1501, for calculating the distance between first-class key points to obtain a first distance;
a second distance calculation module 1502, for calculating the distance between second-class key points to obtain a second distance;
a template adjustment module 1503, for adjusting the depth information template according to the ratio of the first distance to the second distance;
a template information adding module 1504, for adding the adjusted depth information template to the marked region.
Fig. 16 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in Fig. 16, the electronic device 160 according to the embodiment of the present disclosure includes a memory 161 and a processor 162.
The memory 161 is used to store non-transitory computer-readable instructions. Specifically, the memory 161 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, etc.
The processor 162 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and can control other components in the electronic device 160 to perform desired functions. In an embodiment of the present disclosure, the processor 162 is used to run the computer-readable instructions stored in the memory 161, so that the electronic device 160 performs all or some of the steps of the method for constructing image depth information of the embodiments of the present disclosure described above.
Those skilled in the art should understand that, to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also be included in the protection scope of the present disclosure.
For a detailed description of this embodiment, reference can be made to the corresponding descriptions in the foregoing embodiments; details are not repeated here.
Fig. 17 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in Fig. 17, the computer-readable storage medium 170 according to the embodiment of the present disclosure stores non-transitory computer-readable instructions 171. When the non-transitory computer-readable instructions 171 are run by a processor, all or some of the steps of the method for constructing image depth information of the embodiments of the present disclosure described above are executed.
The computer-readable storage medium 170 includes, but is not limited to: optical storage media (e.g. CD-ROM and DVD), magneto-optical storage media (e.g. MO), magnetic storage media (e.g. magnetic tape or mobile hard disk), media with built-in rewritable non-volatile memory (e.g. memory cards) and media with built-in ROM (e.g. ROM cartridges).
For a detailed description of this embodiment, reference can be made to the corresponding descriptions in the foregoing embodiments; details are not repeated here.
Fig. 18 is a hardware structural diagram illustrating a terminal device according to an embodiment of the present disclosure. As shown in Fig. 18, the terminal 180 includes the device for constructing image depth information of the embodiment described above.
The terminal device can be implemented in various forms. The terminal device in the present disclosure can include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative embodiment, the terminal may also include other components. As shown in Fig. 18, the terminal 180 may include a power supply unit 181, a wireless communication unit 182, an A/V (audio/video) input unit 183, a user input unit 184, a sensing unit 185, an interface unit 186, a controller 187, an output unit 188 and a storage unit 189, etc. Fig. 18 shows a terminal with various components, but it should be understood that not all of the shown components are required to be implemented; more or fewer components may alternatively be implemented.
The wireless communication unit 182 allows radio communication between the terminal 180 and a wireless communication system or network. The A/V input unit 183 is used to receive audio or video signals. The user input unit 184 can generate key input data according to commands input by the user, to control various operations of the terminal device. The sensing unit 185 detects the current state of the terminal 180, the position of the terminal 180, the presence or absence of a user's touch input to the terminal 180, the orientation of the terminal 180, the acceleration or deceleration movement and direction of the terminal 180, etc., and generates commands or signals for controlling the operation of the terminal 180. The interface unit 186 serves as an interface through which at least one external device can connect to the terminal 180. The output unit 188 is configured to provide output signals in a visual, audio and/or tactile manner. The storage unit 189 can store software programs for the processing and control operations executed by the controller 187, or temporarily store data that has been output or will be output. The storage unit 189 can include at least one type of storage medium. Moreover, the terminal 180 can cooperate, via a network connection, with a network storage device that performs the storage function of the storage unit 189. The controller 187 usually controls the overall operation of the terminal device. In addition, the controller 187 can include a multimedia module for reproducing or playing back multimedia data. The controller 187 can perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images. The power supply unit 181 receives external power or internal power under the control of the controller 187 and provides the appropriate power required to operate each element and component.
The various embodiments of constructing image depth information proposed by the present disclosure can be implemented using computer software, hardware or any combination thereof in a computer-readable medium. For hardware implementation, the various embodiments proposed by the present disclosure can be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments can be implemented in the controller 187. For software implementation, the various embodiments proposed by the present disclosure can be implemented with separate software modules that allow at least one function or operation to be performed. The software codes can be implemented by a software application (or program) written in any appropriate programming language, and the software codes can be stored in the storage unit 189 and executed by the controller 187.
For a detailed description of this embodiment, reference can be made to the corresponding descriptions in the foregoing embodiments; details are not repeated here.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be pointed out that the advantages, strengths, effects, etc. mentioned in the present disclosure are only examples and not limitations, and it must not be assumed that these advantages, strengths and effects are required by every embodiment of the present disclosure. In addition, the specific details disclosed above are only for the purpose of illustration and ease of understanding, not limitation; the above details do not limit the present disclosure to being implemented using those specific details.
In the present disclosure, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any such actual relationship or order between these entities or operations. The block diagrams of devices, apparatuses, equipment and systems involved in the present disclosure are only illustrative examples and are not intended to require or imply that they must be connected, arranged or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems can be connected, arranged and configured in any manner. Words such as "comprising", "including" and "having" are open-ended words meaning "including but not limited to", and can be used interchangeably therewith. The words "or" and "and" used here refer to "and/or" and can be used interchangeably therewith, unless the context clearly indicates otherwise. The words "such as" used here refer to the phrase "such as, but not limited to" and can be used interchangeably therewith.
In addition, as used herein, the "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e. A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be pointed out that in the systems and methods of the present disclosure, components or steps can be decomposed and/or recombined. These decompositions and/or recombinations should be regarded as equivalent schemes of the present disclosure.
Various changes, substitutions and alterations can be made to the techniques described herein without departing from the techniques taught as defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods and actions described above. Processes, machines, manufacture, compositions of matter, means, methods or actions, currently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein can be utilized. Thus, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods or actions.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects are readily apparent to those skilled in the art, and the general principles defined herein can be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In order to which purpose of illustration and description has been presented for above description.In addition, this description is not intended to the reality of the disclosure It applies example and is restricted to form disclosed herein.Although already discussed above multiple exemplary aspects and embodiment, this field skill Its certain modifications, modification, change, addition and sub-portfolio will be recognized in art personnel.

Claims (12)

1. A method for constructing image depth information, comprising:
obtaining depth information related to a target object;
marking a region of the target object to which the depth information is to be added;
adding the obtained depth information to the region to which the depth information is to be added.
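(By way of illustration only, and not forming part of the claims: the three steps above admit a minimal sketch in Python with NumPy. The function and parameter names below are hypothetical, and the claim does not prescribe any particular library or data layout.)

import numpy as np

def construct_depth(image, depth_template, region_mask):
    # image: H x W x 3 uint8 frame from an ordinary camera without a depth sensor.
    # depth_template: H x W float array of preset depth values (the obtained depth information).
    # region_mask: H x W bool array marking the region to which depth is to be added.
    depth = np.zeros(image.shape[:2], dtype=np.float32)  # the image carries no depth yet
    depth[region_mask] = depth_template[region_mask]     # add the preset depth to the marked region
    return depth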
2. The method for constructing image depth information according to claim 1, wherein marking a region of the target object to which the depth information is to be added comprises:
separating the foreground and the background of the target object;
extracting the foreground of the target object;
marking, in the foreground, the region to which the depth information is to be added.
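(Illustration only: claim 2 does not name a segmentation algorithm. OpenCV's GrabCut, used in this sketch, is merely one common way to separate foreground from background given a rough bounding box around the target object.)

import cv2
import numpy as np

def extract_foreground(image, rect):
    # rect = (x, y, w, h): a rough box around the target object.
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # internal GrabCut state
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))  # definite plus probable foreground
    return image * fg[..., None], fg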
3. The method for constructing image depth information according to claim 2, wherein marking, in the foreground, the region to which the depth information is to be added comprises:
extracting key points in the foreground;
dividing the foreground into regions based on the key points;
marking the key points of the region to which the depth information is to be added.
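(Illustration only: the kind of key point depends on the target object; a face, for instance, would typically use a facial-landmark detector. The generic corner detector below is a stand-in for whatever key-point extractor is used, and the convex hull stands in for the region division of the claim.)

import cv2
import numpy as np

def mark_region_by_key_points(foreground_gray, fg_mask):
    # Extract key points restricted to the foreground mask.
    pts = cv2.goodFeaturesToTrack(foreground_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10,
                                  mask=fg_mask.astype(np.uint8))
    pts = pts.reshape(-1, 2)  # (N, 2) key-point coordinates
    # Delimit one candidate region by the hull of its key points; a real
    # system could divide the foreground into several such regions and
    # mark the key points of only those that are to receive depth.
    hull = cv2.convexHull(pts.astype(np.float32))
    return pts, hull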
4. The method for constructing image depth information according to claim 2, wherein marking, in the foreground, the region to which the depth information is to be added comprises:
extracting contour information of the foreground;
marking, within the contour information, the region to which the depth information is to be added.
5. The method for constructing image depth information according to claim 4, further comprising, after extracting the contour information of the foreground:
smoothing the contour information.
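(Illustrative sketch of claims 4 and 5 together, assuming a binary foreground mask and OpenCV: the outer contour is extracted, and Douglas-Peucker polygonal approximation stands in for the smoothing step, whose exact method the claim leaves open.)

import cv2

def smoothed_foreground_contour(fg_mask_uint8):
    # Extract the outer contour of the foreground (claim 4).
    contours, _ = cv2.findContours(fg_mask_uint8, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)  # keep the largest blob
    # Smooth the contour (claim 5) by polygonal approximation.
    eps = 0.002 * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, eps, True)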
6. The method for constructing image depth information according to claim 1, wherein obtaining depth information related to a target object comprises:
obtaining the depth information related to the target object from a depth information template.
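(Claim 6 leaves the origin of the template open; real templates would presumably be authored per object category, e.g. a generic face depth map. The toy generator below, a hemisphere bulging toward the camera, only gives the later sketches something concrete to work with.)

import numpy as np

def make_hemisphere_template(size=128):
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = xs ** 2 + ys ** 2
    # Depth 1 at the centre, falling to 0 at the rim, and 0 outside the disc.
    return np.sqrt(np.clip(1.0 - r2, 0.0, None)).astype(np.float32)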
7. The method for constructing image depth information according to claim 6, wherein adding the obtained depth information to the region to which the depth information is to be added comprises:
extracting key points of the region to which the depth information is to be added, as first-type key points;
extracting key points of the depth information template, as second-type key points;
adding, based on the first-type key points and the second-type key points, the depth information in the depth information template to the region to which the depth information is to be added.
8. The method for constructing image depth information according to claim 7, wherein adding, based on the first-type key points and the second-type key points, the depth information in the depth information template to the region to which the depth information is to be added comprises:
performing triangulation on the region to which the depth information is to be added based on the first-type key points, to obtain at least one first region;
performing triangulation on the depth information template based on the second-type key points, to obtain at least one second region;
fitting the at least one second region correspondingly onto the at least one first region.
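(A minimal sketch of claim 8, assuming the first-type and second-type key points come in matching (N, 2) arrays so that indexing both with the same simplices makes the two triangulations correspond one-to-one. SciPy's Delaunay triangulation plus a per-triangle affine warp is one concrete realisation; warping the whole template once per triangle is written for clarity rather than speed.)

import cv2
import numpy as np
from scipy.spatial import Delaunay

def fit_template_by_triangles(template_depth, second_pts, first_pts, out_shape):
    depth = np.zeros(out_shape, np.float32)
    for tri in Delaunay(first_pts).simplices:     # same simplices index both point sets
        src = second_pts[tri].astype(np.float32)  # a second region (template triangle)
        dst = first_pts[tri].astype(np.float32)   # its matching first region
        m = cv2.getAffineTransform(src, dst)
        warped = cv2.warpAffine(template_depth, m, (out_shape[1], out_shape[0]))
        mask = np.zeros(out_shape, np.uint8)
        cv2.fillConvexPoly(mask, dst.astype(np.int32), 1)
        depth[mask == 1] = warped[mask == 1]      # paste this triangle's depth
    return depth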
9. The method for constructing image depth information according to claim 7, wherein adding, based on the first-type key points and the second-type key points, the depth information in the depth information template to the region to which the depth information is to be added comprises:
calculating distances between the first-type key points, to obtain a first distance;
calculating distances between the second-type key points, to obtain a second distance;
adjusting the depth information template according to a ratio between the first distance and the second distance;
adding the adjusted depth information template to the region to which the depth information is to be added.
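(A sketch of claim 9 under one reading of the key-point distances: the mean spread of each key-point set around its centroid serves as the representative distance, and the template is resized by the resulting ratio before being added.)

import cv2
import numpy as np

def scale_template_to_region(template_depth, first_pts, second_pts):
    d1 = np.linalg.norm(first_pts - first_pts.mean(axis=0), axis=1).mean()    # first distance
    d2 = np.linalg.norm(second_pts - second_pts.mean(axis=0), axis=1).mean()  # second distance
    ratio = d1 / d2  # how much larger the target region is than the template
    h, w = template_depth.shape
    return cv2.resize(template_depth, (max(1, int(w * ratio)), max(1, int(h * ratio))))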
10. A device for constructing image depth information, comprising:
an obtaining module, configured to obtain depth information related to a target object;
a region marking module, configured to mark a region of the target object to which the depth information is to be added;
an adding module, configured to add the obtained depth information to the region to which the depth information is to be added.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor is able to perform the method for constructing image depth information according to any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the method for constructing image depth information according to any one of claims 1 to 9.
CN201810619716.8A 2018-06-13 2018-06-13 Method and device for constructing image depth information Active CN108833881B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810619716.8A CN108833881B (en) 2018-06-13 2018-06-13 Method and device for constructing image depth information
PCT/CN2019/073070 WO2019237744A1 (en) 2018-06-13 2019-01-25 Method and apparatus for constructing image depth information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810619716.8A CN108833881B (en) 2018-06-13 2018-06-13 Method and device for constructing image depth information

Publications (2)

Publication Number Publication Date
CN108833881A true CN108833881A (en) 2018-11-16
CN108833881B CN108833881B (en) 2021-03-23

Family

ID=64142418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810619716.8A Active CN108833881B (en) 2018-06-13 2018-06-13 Method and device for constructing image depth information

Country Status (2)

Country Link
CN (1) CN108833881B (en)
WO (1) WO2019237744A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833881B (en) * 2018-06-13 2021-03-23 北京微播视界科技有限公司 Method and device for constructing image depth information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903143A (en) * 2011-07-27 2013-01-30 国际商业机器公司 Method and system for converting two-dimensional image into three-dimensional image
CN105513007A (en) * 2015-12-11 2016-04-20 惠州Tcl移动通信有限公司 Mobile terminal based photographing beautifying method and system, and mobile terminal
CN107833178A (en) * 2017-11-24 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019237744A1 (en) * 2018-06-13 2019-12-19 北京微播视界科技有限公司 Method and apparatus for constructing image depth information
CN113256361A (en) * 2020-02-10 2021-08-13 阿里巴巴集团控股有限公司 Commodity publishing method, image processing method, device, equipment and storage medium
CN116503570A (en) * 2023-06-29 2023-07-28 聚时科技(深圳)有限公司 Three-dimensional reconstruction method and related device for image
CN116503570B (en) * 2023-06-29 2023-11-24 聚时科技(深圳)有限公司 Three-dimensional reconstruction method and related device for image

Also Published As

Publication number Publication date
WO2019237744A1 (en) 2019-12-19
CN108833881B (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN111415422B (en) Virtual object adjustment method and device, storage medium and augmented reality equipment
CN108986016A (en) Image beautification method, device and electronic equipment
CN110458805B (en) Plane detection method, computing device and circuit system
US20100014781A1 (en) Example-Based Two-Dimensional to Three-Dimensional Image Conversion Method, Computer Readable Medium Therefor, and System
CN108833881A (en) Construct the method and device of image depth information
US20220237812A1 (en) Item display method, apparatus, and device, and storage medium
CN109961406A (en) Image processing method and device and terminal equipment
CN108830892A (en) Face image processing process, device, electronic equipment and computer readable storage medium
CN103955918A (en) Full-automatic fine image matting device and method
CN103236160A (en) Road network traffic condition monitoring system based on video image processing technology
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
CN103168316A (en) User interface control device, user interface control method, computer program, and integrated circuit
EP4165606A1 (en) Object reconstruction with texture parsing
CN106504264A (en) Video foreground image extraction method and device
CN104036270A (en) Instant automatic translation device and method
CN106296789A (en) A kind of it is virtually implanted method and the terminal that object shuttles back and forth in outdoor scene
CN108921798A (en) The method, apparatus and electronic equipment of image procossing
CN111400423B (en) Smart city CIM three-dimensional vehicle pose modeling system based on multi-view geometry
CN107610148B (en) Foreground segmentation method based on binocular stereo vision system
CN112684892A (en) Augmented reality ammunition recognition glasses-handle continuous carrying system
US20220358694A1 (en) Method and apparatus for generating a floor plan
CN117011493B (en) Three-dimensional face reconstruction method, device and equipment based on symbol distance function representation
CN108898551A (en) The method and apparatus that image merges
CN108961314A (en) Moving image generation method, device, electronic equipment and computer readable storage medium
CN109426522A (en) Interface processing method, device, equipment, medium and the operating system of mobile device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221228

Address after: Room 1445A, No. 55 Xili Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Patentee after: Honey Grapefruit Network Technology (Shanghai) Co.,Ltd.

Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing

Patentee before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd.