CN107507269A - Personalized three-dimensional model generating method, device and terminal device - Google Patents
- Publication number
- CN107507269A (application CN201710642898.6A)
- Authority
- CN
- China
- Prior art keywords
- destination object
- depth image
- personalized
- initial model
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a personalized three-dimensional model generating method, device and terminal device. The method includes: obtaining a depth image of a target object using structured light; determining feature information of the target object according to the depth image; judging whether a model library contains an initial model corresponding to the target object; and, if so, modifying the initial model according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object. This shortens the generation time of the three-dimensional model, improves model generation efficiency, and produces a model that matches the individualized features of the target object, meeting users' personalized needs.
Description
Technical field
The present invention relates to the field of camera technology, and in particular to a personalized three-dimensional model generating method, device and terminal device.
Background technology
A three-dimensional (3D) model is a data model that uses computer technology to represent a real-world object in a virtual three-dimensional space. Because of the intuitiveness of its data, three-dimensional modelling has become a research hotspot in the field of computer vision, with applications such as virtual reality (VR), robot navigation and 3D printing.
At present, a common way of building a three-dimensional model is to establish the model of an object from depth images. During modelling, a depth image of the object is obtained first; the depth image is then processed to obtain dense point cloud data, a point cloud mesh is reconstructed for the target from the dense point cloud data, and multiple frames of depth images are merged and registered to generate the three-dimensional model of the object.
However, this way of generating a three-dimensional model is complicated and time-consuming, and cannot meet users' personalized needs.
Summary of the invention
The present invention aims to solve at least one of the above technical problems to a certain extent.
Therefore, a first aspect of the application proposes a personalized three-dimensional model generating method that shortens the generation time of the three-dimensional model, improves model generation efficiency, and produces a model that matches the individualized features of the target object, meeting users' personalized needs.
A second aspect of the application proposes a personalized three-dimensional model generating device.
A third aspect of the application proposes a terminal device.
A fourth aspect of the application proposes a computer-readable storage medium.
A fifth aspect of the application proposes a computer program.
To solve the above problems, a first aspect of the application proposes a personalized three-dimensional model generating method, the method including:
obtaining a depth image of a target object using structured light;
determining feature information of the target object according to the depth image;
judging whether a model library contains an initial model corresponding to the target object; and
if so, modifying the initial model according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object.
In the personalized three-dimensional model generating method provided by the embodiments of the application, after the depth image of the target object is obtained using structured light, the feature information of the target object can be determined from the depth image; then, when it is determined that the model library contains an initial model corresponding to the target object, that initial model can be modified according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object. By modifying the initial model corresponding to the target object with the target object's feature information, a personalized three-dimensional model can be generated directly, which shortens the generation time of the three-dimensional model, improves model generation efficiency, and produces a model that matches the individualized features of the target object, meeting users' personalized needs.
To solve the above problems, a second aspect of the application proposes a personalized three-dimensional model generating device, the device including:
an acquisition module for obtaining a depth image of a target object using structured light;
a first determining module for determining feature information of the target object according to the depth image;
a judging module for judging whether a model library contains an initial model corresponding to the target object; and
a processing module for, if the model library contains an initial model corresponding to the target object, modifying the initial model according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object.
To solve the above problems, a third aspect of the application proposes a terminal device, including a processor, a memory and an image processing circuit. The memory stores executable program code; the processor reads the executable program code stored in the memory and the depth image output by the image processing circuit to implement the personalized three-dimensional model generating method according to any one of claims 1-4.
To solve the above problems, a fourth aspect of the application proposes a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the personalized three-dimensional model generating method of the first aspect.
To solve the above problems, a fifth aspect of the application proposes a computer program product; when the instructions in the computer program product are executed by a processor, the personalized three-dimensional model generating method of the first aspect is performed.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the personalized three-dimensional model generating method of one embodiment of the application;
Fig. 2 is a schematic diagram of the principle of structured-light three-dimensional vision;
Fig. 2A is a schematic diagram of the speckle distribution of non-uniform structured light;
Fig. 2B is a schematic diagram of the speckle distribution of uniform structured light;
Fig. 3 is a flow chart of the personalized three-dimensional model generating method of another embodiment of the application;
Fig. 4 is a schematic diagram of the generating process of the initial model provided by the embodiments of the application;
Fig. 5 is a structural diagram of the personalized three-dimensional model generating device of one embodiment of the application;
Fig. 6 is a structural diagram of the personalized three-dimensional model generating device of another embodiment of the application;
Fig. 7 is a structural diagram of the terminal device provided by one embodiment of the application;
Fig. 8 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote, throughout, the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, intended to explain the present invention, and are not to be construed as limiting the present invention.
The personalized three-dimensional model generating method, device and terminal device of the embodiments of the present invention are described below with reference to the accompanying drawings.
For the problems that existing three-dimensional model generation methods are complicated, time-consuming and unable to meet users' personalized needs, embodiments of the present invention propose a personalized three-dimensional model generating method. A depth image of a target object is obtained using structured light; after feature information of the target object is determined from the depth image, a target initial model in a model library can be modified according to that feature information to generate a personalized three-dimensional model corresponding to the target object. Because the generated three-dimensional model is obtained merely by modifying an initial model according to the features of the target object, without merging and registering multiple frames of depth images, the generation time of the three-dimensional model is shortened, model generation efficiency is improved, and the generated model matches the individualized features of the target object, meeting users' personalized needs.
The personalized three-dimensional model generating method of the embodiments of the application is described below with reference to Fig. 1.
Fig. 1 is a flow chart of the personalized three-dimensional model generating method of one embodiment of the application.
As shown in Fig. 1, the method includes:
Step 101: obtain a depth image of a target object using structured light.
The personalized three-dimensional model generating method provided by the embodiments of the present invention can be performed by the personalized three-dimensional model generating device provided by the embodiments of the present invention.
Specifically, the personalized three-dimensional model generating device can be configured in any terminal device with a depth-image shooting function. There are many types of terminal device, which can be selected as needed, such as a mobile phone, a computer or a camera.
In a specific implementation, obtaining the depth image of an object using structured light is realized on the basis of the optical triangulation measuring principle. Fig. 2 is a schematic diagram of the principle of structured-light three-dimensional vision.
As shown in Fig. 2 optical projection device by the project structured light of certain pattern in body surface, formed on the surface by quilt
Survey the 3-D view for the striation that object surface shape is modulated.The 3-D view by the camera detection in another location, from
And obtain striation two dimension fault image.The distortion degree of striation depend on relative position between optical projection device and video camera and
Profiling object surface (height).Intuitively, the displacement (or skew) shown along striation is highly proportional to body surface, kink
The change of plane is illustrated, discontinuously shows the physical clearance on surface, when the relative position between optical projection device and video camera
A timing is put, object surface tri-dimensional profile can be reappeared by the two-dimentional optical strip image coordinate to distort.
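The triangulation relationship above can be illustrated with a minimal sketch. The baseline and focal length below are hypothetical calibration values, not figures from this patent; the sketch only shows that, by similar triangles, a larger stripe offset in the image corresponds to a closer surface.

```python
def depth_from_offset(offset_px, baseline_mm=75.0, focal_px=580.0):
    """Optical triangulation: with a fixed projector-camera baseline,
    depth is inversely proportional to the observed stripe offset."""
    if offset_px <= 0:
        raise ValueError("offset must be positive")
    return baseline_mm * focal_px / offset_px

# A stripe shifted by 58 px maps to a surface 750 mm away under these
# hypothetical calibration values.
print(depth_from_offset(58.0))
```

Doubling the observed offset halves the recovered depth, which is the essence of reproducing the surface profile from the distorted stripe image.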
In actual use, according to the beam patterns projected by the optical projection device, structured-light modes can be divided into point-structured light, line-structured light, multi-line structured light, area-structured light, phase methods and the like. In the embodiments of the application, any form of structured-light mode can be selected as needed to obtain the depth image of the target object.
The target object in the embodiments of the application can be any object with a solid form, such as a human body, a building or a plant.
In a preferred implementation of the application, non-uniform structured light can be used to obtain the depth image of the target object.
Specifically, non-uniform structured light can be formed in a variety of ways.
For example, frosted glass can be irradiated by an infrared laser light source, producing interference and thus forming non-uniform structured light in the head region of the user.
Alternatively, non-uniform structured light can be formed by projection through a diffractive optical element. Specifically, a single collimated laser source can pass through one or more diffractive optical elements to form non-uniform structured light in the head region of the user.
Alternatively, a randomly distributed laser array can pass directly through a diffractive optical element to form, in the head region of the user, randomly distributed speckle consistent with the laser array, i.e. non-uniform structured light. In this way, the detailed distribution of the speckle can also be controlled, which is not limited here.
It should be noted that when the surface of an object is illuminated with non-uniform structured light and with uniform structured light respectively, the speckle distribution of the non-uniform structured light is as shown in Fig. 2A, and that of the uniform structured light is as shown in Fig. 2B. As can be seen from Figs. 2A and 2B, in a region of the same size, Fig. 2A contains 11 spots while Fig. 2B contains 16 spots; that is, the non-uniform structured light contains fewer spots than the uniform structured light. Therefore, obtaining the depth image of the user's face with non-uniform structured light consumes less energy, saves more power and improves the user experience.
Step 102: determine feature information of the target object according to the depth image.
The feature information of the target object refers to information that can be used to distinguish the target object from other objects of the same kind. For example, if the target object is a person, the feature information can be the depth and brightness values corresponding to different parts of the face, leg length, height and the like; if the target object is a building, the feature information can include the height, facade colour and outline of the building.
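As a hedged illustration of step 102, the sketch below derives simple per-region feature information (mean depth and mean brightness) from a toy depth image. The region names and pixel lists are invented for the example; the patent does not prescribe a particular feature-extraction algorithm.

```python
def extract_features(depth, brightness, regions):
    """For each named region (a list of (row, col) pixels), compute the
    mean depth and mean brightness as simple feature information."""
    feats = {}
    for name, pixels in regions.items():
        feats[name] = {
            "depth": sum(depth[r][c] for r, c in pixels) / len(pixels),
            "brightness": sum(brightness[r][c] for r, c in pixels) / len(pixels),
        }
    return feats

depth = [[40, 42], [60, 62]]
brightness = [[200, 210], [90, 100]]
regions = {"nose": [(0, 0), (0, 1)], "chin": [(1, 0), (1, 1)]}
print(extract_features(depth, brightness, regions))
```

In a real system the regions would come from landmark detection on the depth or brightness image rather than from hand-written pixel lists.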
Step 103: judge whether the model library contains an initial model corresponding to the target object.
Step 104: if so, modify the initial model according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object.
Specifically, initial models of various objects can be pre-configured in the model library of the personalized three-dimensional model generating device. After the depth image of the target object is obtained, the initial model corresponding to the target object can be modified according to the feature information of the target object to generate the personalized model of the target object.
The initial models in the model library can be configured in advance, or can be generated by the personalized three-dimensional model generating device after training on depth images of various objects; this embodiment does not limit this.
In actual use, if an initial model corresponding to the target object exists in the model library, the personalized three-dimensional model generating device can obtain that initial model directly from the model library, and then modify it according to the feature information of the target object to generate the personalized three-dimensional model corresponding to the target object.
For example, if the target object is a person and an initial model corresponding to a person exists in the model library, the personalized three-dimensional model generating device can parse the depth image of the person to determine the depth and brightness values of the person's face and the like, and then modify the initial model according to each piece of the person's feature information to generate the person's personalized three-dimensional model.
Because the process of generating the three-dimensional model of the target object is merely a process of modifying a general initial model according to the feature information of the target object, without image denoising, smoothing, segmentation, merging or registration, the generation time of the personalized three-dimensional model is saved and the generation speed of the three-dimensional model is improved.
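A minimal sketch of the "modify the initial model" idea: scale a template mesh's vertices so that one measured feature (here, height) matches the target. The template vertices and measurements are hypothetical, and a real correction would adjust many features independently rather than apply a single uniform scale.

```python
def personalize(template_vertices, template_height_cm, measured_height_cm):
    """Uniformly scale a template model so its height matches the
    measured height of the target object."""
    s = measured_height_cm / template_height_cm
    return [(x * s, y * s, z * s) for (x, y, z) in template_vertices]

# Hypothetical two-vertex template, 160 cm tall; measured person is 168 cm.
template = [(0.0, 0.0, 0.0), (10.0, 160.0, 5.0)]
print(personalize(template, 160.0, 168.0))
```

The cheapness of this step, compared with rebuilding a mesh from dense point clouds, is what the surrounding paragraph credits for the shortened generation time.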
It can be understood that if the model library does not contain an initial model corresponding to the target object, the personalized three-dimensional model generating device can also process the currently obtained depth image of the target object to generate the three-dimensional model corresponding to the target object.
In actual use, after the personalized three-dimensional model of the target object is generated, the three-dimensional model corresponding to the target object can be invoked directly during subsequent photographing of the target object, and the depth image of the target object can then be generated directly from the depth and brightness information of the target object's current local feature points, improving the acquisition speed and efficiency of the depth image.
In the personalized three-dimensional model generating method provided by the embodiments of the present invention, after the depth image of the target object is obtained using structured light, the feature information of the target object can be determined from the depth image; then, when it is determined that the model library contains an initial model corresponding to the target object, that initial model can be modified according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object. By modifying the initial model corresponding to the target object with the target object's feature information, a personalized three-dimensional model can be generated directly, which shortens the generation time of the three-dimensional model, improves model generation efficiency, and produces a model that matches the individualized features of the target object, meeting users' personalized needs.
As can be seen from the above analysis, the initial model can be modified according to the feature information of the target object to generate the personalized three-dimensional model corresponding to the target object. In a specific implementation, because the model library may store models of multiple objects, or multiple models for the same object, the embodiments of the application can, in order to further shorten the generation time of the three-dimensional model, select the best-matching initial model for modification according to the attribute information of the target object. The personalized three-dimensional model generating method provided by the embodiments of the present invention is further described below with reference to Fig. 3.
Fig. 3 is a flow chart of the personalized three-dimensional model generating method of another embodiment of the application.
As shown in Fig. 3, the method includes:
Step 301: obtain a depth image of a target object using structured light.
Step 302: determine feature information and attribute information of the target object according to the depth image.
The attribute information of the target object refers to information that can accurately locate the class to which the object belongs. For example, if the target object is a person, the attribute information can include one or more of the following: sex, age, height, weight and the like; if the target object is a chair, its attribute information can include one or more of the following: height, material, number of legs, whether it has a backrest and the like.
Step 303: judge whether the model library contains an initial model corresponding to the attribute information of the target object; if so, perform step 304; otherwise, perform step 305.
Specifically, correspondences between different attribute information and initial models can be stored in the model library in advance, so that after the attribute information of the target object is determined, the initial model corresponding to the target object can be obtained.
For example, suppose initial models A, B, C and D are stored in the model library, where the attribute information corresponding to initial model A is: female, height 160 centimetres (cm) to 165 cm, age 20 to 40; that corresponding to initial model B is: female, height 155 cm to 160 cm, age 20 to 40; that corresponding to initial model C is: male, height 170 cm to 175 cm, age 20 to 40; and that corresponding to initial model D is: male, height 170 cm to 175 cm, age 40 to 65.
Then, when the attribute information of the target object is determined to be female, height 155 cm to 160 cm, age over 20, it can be determined from the preset correspondence that the initial model corresponding to the target object is B.
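The attribute lookup in the example above can be sketched as follows. The library entries mirror models A-D from the description; the field names and data layout are invented for illustration.

```python
MODEL_LIBRARY = [
    {"id": "A", "sex": "female", "height_cm": (160, 165), "age": (20, 40)},
    {"id": "B", "sex": "female", "height_cm": (155, 160), "age": (20, 40)},
    {"id": "C", "sex": "male", "height_cm": (170, 175), "age": (20, 40)},
    {"id": "D", "sex": "male", "height_cm": (170, 175), "age": (40, 65)},
]

def find_initial_model(library, sex, height_cm, age):
    """Return the id of the first initial model whose stored attribute
    ranges cover the target object's attributes, or None if absent."""
    for m in library:
        if (m["sex"] == sex
                and m["height_cm"][0] <= height_cm <= m["height_cm"][1]
                and m["age"][0] <= age <= m["age"][1]):
            return m["id"]
    return None

# Female, 158 cm, age 25 -> model B, as in the description's example.
print(find_initial_model(MODEL_LIBRARY, "female", 158, 25))
```

A `None` result corresponds to the branch to step 305, where the initial model has to be generated from the depth image instead.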
It should be noted that the initial model corresponding to each set of attribute information can be obtained by processing multiple frames of depth images corresponding to one object; or, in order to improve the reliability and accuracy of the initial model, it can be obtained by processing the depth images of multiple objects with the same attribute information. That is, in the embodiments of the application, before step 303, the method further includes:
determining a candidate object set with the same attribute information as the target object; and
generating the initial model according to the depth images corresponding to each object in the candidate object set.
Specifically, when training to generate an initial model, the personalized three-dimensional model generating device can first determine, according to the attribute information corresponding to the initial model to be trained, a candidate object set with that attribute information, and then generate the initial model corresponding to that attribute information from the depth images corresponding to each object in the candidate object set.
For example, suppose the attribute information corresponding to the initial model to be trained is: girl, under 5 years old. The personalized three-dimensional model generating device can then select, from a large number of existing depth images, the depth images containing girls under 5 years old as the images to be trained on, and generate the initial model corresponding to that attribute information by processing the selected depth images.
Specifically, the process of generating the target initial model according to the depth images corresponding to each object can be realized in the manner shown in Fig. 4, which is a schematic diagram of the generating process of the initial model provided by the embodiments of the application.
As shown in Fig. 4, the process includes:
Step 401: determine a candidate object set with the same attribute information as the target object.
Step 402: process the depth images corresponding to each object in the candidate object set to obtain the dense point cloud data of each depth image.
Specifically, because the depth images corresponding to each object may contain backgrounds and surroundings that are not to be processed, the personalized three-dimensional model generating device can first apply denoising and smoothing to each depth image to obtain the image of the object region, and then separate the object from the background through foreground-background segmentation and similar processing.
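Step 402's preprocessing can be sketched as a 3x3 median filter followed by a depth-threshold foreground mask. This is one simple choice of denoising and foreground-background segmentation for illustration, not necessarily the processing this patent has in mind.

```python
def median_denoise(img):
    """Replace each interior pixel with the median of its 3x3 window,
    suppressing isolated depth spikes; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(img[rr][cc]
                            for rr in (r - 1, r, r + 1)
                            for cc in (c - 1, c, c + 1))
            out[r][c] = window[4]  # median of 9 values
    return out

def segment_foreground(img, max_depth):
    """Keep pixels closer than max_depth; mark background pixels as None."""
    return [[v if v < max_depth else None for v in row] for row in img]

noisy = [[50, 50, 50], [50, 900, 50], [50, 50, 50]]  # one depth spike
clean = median_denoise(noisy)
print(clean[1][1])                     # spike replaced by the local median
print(segment_foreground(clean, 100))  # everything here is foreground
```

A production pipeline would use a bilateral or guided filter to preserve depth edges, but the median filter shows the denoise-then-segment order the description relies on.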
Step 403: carry out point cloud mesh reconstruction for each object according to the dense point cloud data.
After each object is extracted from the depth images, dense point data can be extracted from the depth image of each object, and the points can then be connected into a mesh according to the extracted dense point data. For example, points at the same level, or points whose distance lies within a threshold range, are connected into a triangular mesh according to the spatial relationship between the points, and the three-dimensional model of each object can then be generated by stitching these meshes together.
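A minimal sketch of connecting dense depth points into a triangular mesh: each 2x2 cell of an organized depth grid is split into two triangles, and cells whose depth jump exceeds a threshold are skipped, which corresponds to not connecting points outside the distance threshold. The grid layout and threshold are assumptions for the example.

```python
def grid_mesh(depth, max_jump=10.0):
    """Triangulate an organized depth grid: two triangles per 2x2 cell,
    skipping cells whose depth discontinuity exceeds max_jump."""
    triangles = []
    rows, cols = len(depth), len(depth[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            cell = (depth[r][c], depth[r][c + 1],
                    depth[r + 1][c], depth[r + 1][c + 1])
            if max(cell) - min(cell) <= max_jump:
                triangles.append(((r, c), (r, c + 1), (r + 1, c)))
                triangles.append(((r + 1, c), (r, c + 1), (r + 1, c + 1)))
    return triangles

smooth = [[50.0, 51.0], [52.0, 53.0]]   # one cell, small depth jumps
edge = [[50.0, 300.0], [52.0, 301.0]]   # one cell across a depth gap
print(len(grid_mesh(smooth)), len(grid_mesh(edge)))
```

Unorganized point clouds would instead need Delaunay triangulation or a surface-reconstruction method, but the grid case captures the "connect nearby points into triangles" idea in the text.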
Step 404: merge and register the reconstructed depth images to generate the initial model.
Specifically, merging and registering the depth images of multiple objects with the same attribute information makes the generated initial model more applicable and reliable, so that the personalized three-dimensional model obtained from the initial model is more accurate and more reliable.
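Once the individual reconstructions are registered into a common coordinate frame with vertex correspondence, merging can be reduced, for illustration, to averaging corresponding vertices. This is a deliberately simplified stand-in; real merging and registration (e.g. ICP-style alignment) is considerably more involved than what the patent's text or this sketch spells out.

```python
def merge_registered(models):
    """Average vertex-wise across registered meshes that share the same
    topology, yielding a single averaged initial model."""
    n = len(models)
    n_vertices = len(models[0])
    return [tuple(sum(m[i][axis] for m in models) / n for axis in range(3))
            for i in range(n_vertices)]

# Two registered one-vertex "meshes" average to the midpoint.
a = [(0.0, 0.0, 0.0)]
b = [(2.0, 4.0, 6.0)]
print(merge_registered([a, b]))
```

Averaging over many same-attribute objects is what gives the initial model the generality that the description credits for its reliability.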
Step 304: modify the initial model according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object.
Step 305: generate the initial model corresponding to the target object according to the depth image of the target object.
It can be understood that when it is determined that the model library does not contain an initial model corresponding to the target object, the initial model corresponding to the target object can be generated from the depth image of the target object alone; or, by the above means, it can be generated from the depth images of multiple other objects with the same attribute information as the target object, which this embodiment does not limit.
In the personalized three-dimensional model generating method provided by the embodiments of the application, after the depth image of the target object is obtained using structured light, the feature information and attribute information of the target object are first determined from the depth image; the initial model corresponding to the target object is then obtained from the model library according to the attribute information of the target object, and is modified according to the feature information of the target object to generate the personalized three-dimensional model corresponding to the target object. This shortens the generation time of the three-dimensional model, improves model generation efficiency, and produces a model that matches the individualized features of the target object, meeting users' personalized needs. Moreover, because the initial model corresponding to the target object is generated from the depth images of multiple objects with the same attributes, the reliability and accuracy of the initial model are improved, making the generated personalized three-dimensional model more accurate and more reliable.
Fig. 5 is a structural diagram of the personalized three-dimensional model generating device of one embodiment of the application.
As shown in Fig. 5, the personalized three-dimensional model generating device includes:
an acquisition module 51 for obtaining a depth image of a target object using structured light;
a first determining module 52 for determining feature information of the target object according to the depth image;
a judging module 53 for judging whether a model library contains an initial model corresponding to the target object; and
a processing module 54 for, if the model library contains an initial model corresponding to the target object, modifying the initial model according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object.
The personalized three-dimensional model generating device provided by this embodiment can perform the personalized three-dimensional model generating method provided by the embodiments of the present invention.
Specifically, the personalized three-dimensional model generating device can be configured in any terminal device with a depth-image shooting function. There are many types of terminal device, which can be selected as needed, such as a mobile phone, a computer or a camera.
In a possible implementation of this embodiment, the above judging module is specifically used for:
determining the attribute information of the target object according to the depth image; and
judging whether the model library contains an initial model corresponding to the attribute information of the target object.
It should be noted that the explanation of the personalized three-dimensional model generating method embodiments above also applies to the personalized three-dimensional model generating device of this embodiment, and is not repeated here.
In the personalized three-dimensional model generating device provided by the embodiments of the application, after the depth image of the target object is obtained using structured light, the feature information of the target object can be determined from the depth image; then, when it is determined that the model library contains an initial model corresponding to the target object, that initial model can be modified according to the feature information of the target object to generate a personalized three-dimensional model corresponding to the target object. This shortens the generation time of the three-dimensional model, improves model generation efficiency, and produces a model that matches the individualized features of the target object, meeting users' personalized needs.
Fig. 6 is a structural diagram of a personalized three-dimensional model generating apparatus according to another embodiment of the present application.
As shown in Fig. 6, on the basis of the structure shown in Fig. 5, the apparatus further includes:
a second determining module 61, configured to determine a candidate object set having the same attribute information as the target object; and
a generation module 62, configured to generate the initial model according to the depth images respectively corresponding to the objects in the candidate object set.
Specifically, the generation module 62 is configured to:
process the depth images corresponding to each object to obtain dense point cloud data of each depth image;
perform point-cloud mesh reconstruction on each object according to the dense point cloud data; and
fuse and register the reconstructed depth images to generate the initial model.
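The first two steps above (depth image to dense point cloud, then combining clouds from several same-attribute objects) can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patent's algorithm: the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) are hypothetical, and the fusion step naively averages clouds that are assumed to be already registered.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project a depth image (H x W, metric depth) into a dense
    point cloud using a hypothetical pinhole camera model."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def fuse_clouds(clouds):
    # Naive fusion: assume the clouds are pre-registered point-for-point
    # and average them into one "initial model" cloud.
    return np.mean(np.stack(clouds), axis=0)
```

In practice the registration step would use an algorithm such as ICP before any averaging; the averaging here only stands in for the fuse/register stage of the generation module.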
It should be noted that the foregoing explanation of the personalized three-dimensional model generating method embodiments also applies to the apparatus of this embodiment, and is not repeated here.
With the personalized three-dimensional model generating apparatus provided in this embodiment of the present application, after the depth image of the target object is acquired using structured light, the characteristic information and attribute information of the target object are first determined according to the depth image; an initial model corresponding to the target object is then obtained from the model library according to the attribute information, and the initial model is modified according to the characteristic information to generate the personalized three-dimensional model corresponding to the target object. This shortens the generation time of the three-dimensional model and improves model generation efficiency, and the generated model reflects the individualized features of the target object, satisfying the user's personalized needs. Furthermore, because the initial model corresponding to the target object is generated from the depth images of multiple objects sharing the same attribute, the reliability and accuracy of the initial model are improved, making the generated personalized three-dimensional model more accurate and more reliable.
A further embodiment of the present invention also proposes a terminal device.
Fig. 7 is a structural diagram of a terminal device provided by an embodiment of the present application.
Terminal devices come in many types and may be selected according to usage needs, for example: a mobile phone, a computer, a camera, and the like. Fig. 7 takes a mobile phone as the example terminal device.
As shown in Fig. 7, the terminal device includes: a processor 71, a memory 72, and an image processing circuit 73.
The memory 72 is configured to store executable program code. The processor 71 reads the executable program code stored in the memory 72 and the depth image output by the image processing circuit 73 to implement the personalized three-dimensional model generating method of the foregoing embodiments.
Specifically, the image processing circuit 73 may be implemented using hardware and/or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
Fig. 8 is a schematic diagram of the image processing circuit in one embodiment. For ease of illustration, only the aspects of the image processing technique related to the embodiments of the present invention are shown.
As shown in Fig. 8, the image processing circuit includes an imaging device 810, an ISP processor 830, and a control logic device 840. The imaging device 810 may include a camera with one or more lenses 812 and an image sensor 814, together with a structured light projector 816. The structured light projector 816 projects structured light onto the measured object; the structured light pattern may be laser stripes, a Gray code, sinusoidal stripes, a randomly arranged speckle pattern, or the like. The image sensor 814 captures the structured light image formed by the projection on the measured object and sends it to the ISP processor 830, which demodulates the structured light image to obtain the depth information of the measured object. Meanwhile, the image sensor 814 can also capture the color information of the measured object. Alternatively, two image sensors 814 may capture the structured light image and the color information of the measured object respectively.
Taking speckle structured light as an example, the ISP processor 830 demodulates the structured light image as follows: the speckle image of the measured object is extracted from the structured light image, and image data calculation is performed on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, so as to obtain the displacement of each speckle point in the captured speckle image relative to its corresponding reference speckle in the reference speckle image. The depth value of each speckle point is then calculated by triangulation, and the depth information of the measured object is obtained from these depth values.
Of course, the depth image information may also be obtained by binocular vision or by a time-of-flight (TOF) based method, which is not limited here; any method capable of obtaining or calculating the depth information of the measured object falls within the scope of this embodiment.
After the ISP processor 830 receives the color information of the measured object captured by the image sensor 814, it can process the image data corresponding to that color information. The ISP processor 830 analyzes the image data to obtain image statistics that can be used to determine one or more control parameters of the imaging device 810. The image sensor 814 may include a color filter array (e.g., a Bayer filter); the image sensor 814 can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data to be processed by the ISP processor 830.
The ISP processor 830 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 830 can perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be performed at the same or different bit-depth precisions.
The ISP processor 830 can also receive pixel data from an image memory 820. The image memory 820 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.
Upon receiving the raw image data, the ISP processor 830 can perform one or more image processing operations.
After the ISP processor 830 obtains the color information and the depth information of the measured object, it can fuse them to obtain a three-dimensional image. The features of the measured object may be extracted by at least one of an appearance contour extraction method or a contour feature extraction method, for example by active shape models (ASM), active appearance models (AAM), principal component analysis (PCA), or the discrete cosine transform (DCT); the extraction method is not limited here. The features extracted from the depth information and those extracted from the color information are then registered and fused. The fusion here may be a direct combination of the features extracted from the depth information and the color information, or a combination in which the same feature from different images is assigned a weight before being combined; other fusion methods are also possible. Finally, the three-dimensional image is generated according to the fused features.
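The weighted fusion variant described above can be sketched as follows. This is an illustrative assumption, not the patent's fusion procedure: features are registered simply by name, and the 0.6/0.4 weights are arbitrary.

```python
def fuse_features(depth_feats, color_feats, w_depth=0.6, w_color=0.4):
    """Combine features extracted from depth information and from color
    information. Features present in both sources are weighted and summed;
    a feature found in only one source is kept directly."""
    fused = {}
    for name in set(depth_feats) | set(color_feats):
        if name in depth_feats and name in color_feats:
            fused[name] = w_depth * depth_feats[name] + w_color * color_feats[name]
        else:
            fused[name] = depth_feats.get(name, color_feats.get(name))
    return fused
```

Direct combination, the other variant mentioned in the text, corresponds to simply taking the union of the two feature sets without weighting.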
The image data of the three-dimensional image can be sent to the image memory 820 for additional processing before being displayed. The ISP processor 830 receives the processed data from the image memory 820 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data of the three-dimensional image may be output to a display 860 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). In addition, the output of the ISP processor 830 can also be sent to the image memory 820, and the display 860 can read image data from the image memory 820. In one embodiment, the image memory 820 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 830 can be sent to an encoder/decoder 850 in order to encode/decode the image data. The encoded image data can be saved and decompressed before being shown on the display 860. The encoder/decoder 850 may be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by the ISP processor 830 can be sent to the control logic device 840. The control logic device 840 may include a processor and/or a microcontroller that executes one or more routines (e.g., firmware); the one or more routines can determine the control parameters of the imaging device 810 according to the received image statistics.
It should be noted that the foregoing explanation of the personalized three-dimensional model generating method embodiments also applies to the terminal device of this embodiment; its implementation principle is similar and is not repeated here.
An embodiment of the present application further proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the personalized three-dimensional model generating method of the foregoing embodiments is implemented.
To achieve the above purpose, an embodiment of the present application proposes a computer program product; when the instructions in the computer program product are executed by a processor, the personalized three-dimensional model generating method of the foregoing embodiments is performed.
It should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The logic and/or steps expressed herein or otherwise described, for example in the flowcharts, can be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) with one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optical scanning of the paper or other medium followed by editing, interpretation, or other suitable processing if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that the specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no conflict arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (10)
- 1. A personalized three-dimensional model generating method, characterized by comprising: acquiring a depth image of a target object using structured light; determining characteristic information of the target object according to the depth image; judging whether a model library includes an initial model corresponding to the target object; and if so, modifying the initial model according to the characteristic information of the target object to generate a personalized three-dimensional model corresponding to the target object.
- 2. The method according to claim 1, characterized in that the judging whether the model library includes a target initial model corresponding to the target object comprises: determining attribute information of the target object according to the depth image; and judging whether the model library includes an initial model corresponding to the attribute information of the target object.
- 3. The method according to claim 2, characterized in that, before the judging whether the model library includes a target initial model corresponding to the attribute information of the target object, the method further comprises: determining a candidate object set having the same attribute information as the target object; and generating the initial model according to the depth images respectively corresponding to the objects in the candidate object set.
- 4. The method according to claim 3, characterized in that the generating the initial model according to the depth images respectively corresponding to the objects in the candidate object set comprises: processing the depth images corresponding to each object to obtain dense point cloud data of each depth image; performing point-cloud mesh reconstruction on each object according to the dense point cloud data; and fusing and registering the reconstructed depth images to generate the initial model.
- 5. A personalized three-dimensional model generating apparatus, characterized by comprising: an acquisition module, configured to acquire a depth image of a target object using structured light; a first determining module, configured to determine characteristic information of the target object according to the depth image; a judging module, configured to judge whether a model library includes an initial model corresponding to the target object; and a processing module, configured to, if the model library includes an initial model corresponding to the target object, modify the initial model according to the characteristic information of the target object to generate a personalized three-dimensional model corresponding to the target object.
- 6. The apparatus according to claim 5, characterized in that the judging module is specifically configured to: determine attribute information of the target object according to the depth image; and judge whether the model library includes an initial model corresponding to the attribute information of the target object.
- 7. The apparatus according to claim 6, characterized by further comprising: a second determining module, configured to determine a candidate object set having the same attribute information as the target object; and a generation module, configured to generate the initial model according to the depth images respectively corresponding to the objects in the candidate object set.
- 8. The apparatus according to claim 7, characterized in that the generation module is specifically configured to: process the depth images corresponding to each object to obtain dense point cloud data of each depth image; perform point-cloud mesh reconstruction on each object according to the dense point cloud data; and fuse and register the reconstructed depth images to generate the initial model.
- 9. A terminal device, characterized by comprising: a processor, a memory, and an image processing circuit; wherein the memory is configured to store executable program code, and the processor reads the executable program code stored in the memory and the depth image output by the image processing circuit to implement the personalized three-dimensional model generating method according to any one of claims 1-4.
- 10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the personalized three-dimensional model generating method according to any one of claims 1-4 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710642898.6A CN107507269A (en) | 2017-07-31 | 2017-07-31 | Personalized three-dimensional model generating method, device and terminal device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710642898.6A CN107507269A (en) | 2017-07-31 | 2017-07-31 | Personalized three-dimensional model generating method, device and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107507269A true CN107507269A (en) | 2017-12-22 |
Family
ID=60689364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710642898.6A Pending CN107507269A (en) | 2017-07-31 | 2017-07-31 | Personalized three-dimensional model generating method, device and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107507269A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146969A (en) * | 2018-08-01 | 2019-01-04 | 北京旷视科技有限公司 | Pedestrian's localization method, device and processing equipment and its storage medium |
CN109191552A (en) * | 2018-08-16 | 2019-01-11 | Oppo广东移动通信有限公司 | Threedimensional model processing method, device, electronic equipment and storage medium |
CN109299510A (en) * | 2018-08-27 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Threedimensional model determines method, device and equipment |
CN109334010A (en) * | 2018-10-10 | 2019-02-15 | 成都我有科技有限责任公司 | 3D printer implementation method, device and electronic equipment |
CN110378996A (en) * | 2019-06-03 | 2019-10-25 | 国网浙江省电力有限公司温州供电公司 | Server threedimensional model generation method and generating means |
CN110992476A (en) * | 2019-12-18 | 2020-04-10 | 深圳度影医疗科技有限公司 | 3D printing method of fetus three-dimensional ultrasonic image, storage medium and ultrasonic equipment |
CN111105343A (en) * | 2018-10-26 | 2020-05-05 | Oppo广东移动通信有限公司 | Method and device for generating three-dimensional model of object |
CN112233142A (en) * | 2020-09-29 | 2021-01-15 | 深圳宏芯宇电子股份有限公司 | Target tracking method, device and computer readable storage medium |
WO2021007760A1 (en) * | 2019-07-15 | 2021-01-21 | Oppo广东移动通信有限公司 | Identity recognition method, terminal device and computer storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1482580A (en) * | 2002-09-15 | 2004-03-17 | | Method for forming new three-dimensional model using a group of two-dimensional photos and three-dimensional library
CN102156810A (en) * | 2011-03-30 | 2011-08-17 | 北京触角科技有限公司 | Augmented reality real-time virtual fitting system and method thereof |
CN104008571A (en) * | 2014-06-12 | 2014-08-27 | 深圳奥比中光科技有限公司 | Human body model obtaining method and network virtual fitting system based on depth camera |
CN104376599A (en) * | 2014-12-11 | 2015-02-25 | 苏州丽多网络科技有限公司 | Handy three-dimensional head model generation system |
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
CN104732585A (en) * | 2015-03-23 | 2015-06-24 | 腾讯科技(深圳)有限公司 | Human body type reconstructing method and device |
CN105513114A (en) * | 2015-12-01 | 2016-04-20 | 深圳奥比中光科技有限公司 | Three-dimensional animation generation method and device |
CN106355610A (en) * | 2016-08-31 | 2017-01-25 | 杭州远舟医疗科技有限公司 | Three-dimensional human body surface reconstruction method and device |
CN106952334A (en) * | 2017-02-14 | 2017-07-14 | 深圳奥比中光科技有限公司 | The creation method of the net model of human body and three-dimensional fitting system |
2017-07-31: CN CN201710642898.6A patent/CN107507269A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1482580A (en) * | 2002-09-15 | 2004-03-17 | | Method for forming new three-dimensional model using a group of two-dimensional photos and three-dimensional library
CN102156810A (en) * | 2011-03-30 | 2011-08-17 | 北京触角科技有限公司 | Augmented reality real-time virtual fitting system and method thereof |
CN104008571A (en) * | 2014-06-12 | 2014-08-27 | 深圳奥比中光科技有限公司 | Human body model obtaining method and network virtual fitting system based on depth camera |
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
CN104376599A (en) * | 2014-12-11 | 2015-02-25 | 苏州丽多网络科技有限公司 | Handy three-dimensional head model generation system |
CN104732585A (en) * | 2015-03-23 | 2015-06-24 | 腾讯科技(深圳)有限公司 | Human body type reconstructing method and device |
CN105513114A (en) * | 2015-12-01 | 2016-04-20 | 深圳奥比中光科技有限公司 | Three-dimensional animation generation method and device |
CN106355610A (en) * | 2016-08-31 | 2017-01-25 | 杭州远舟医疗科技有限公司 | Three-dimensional human body surface reconstruction method and device |
CN106952334A (en) * | 2017-02-14 | 2017-07-14 | 深圳奥比中光科技有限公司 | The creation method of the net model of human body and three-dimensional fitting system |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146969B (en) * | 2018-08-01 | 2021-01-26 | 北京旷视科技有限公司 | Pedestrian positioning method, device and processing equipment and storage medium thereof |
CN109146969A (en) * | 2018-08-01 | 2019-01-04 | 北京旷视科技有限公司 | Pedestrian's localization method, device and processing equipment and its storage medium |
CN109191552A (en) * | 2018-08-16 | 2019-01-11 | Oppo广东移动通信有限公司 | Threedimensional model processing method, device, electronic equipment and storage medium |
CN109299510B (en) * | 2018-08-27 | 2020-06-30 | 百度在线网络技术(北京)有限公司 | Three-dimensional model determination method, device and equipment |
CN109299510A (en) * | 2018-08-27 | 2019-02-01 | 百度在线网络技术(北京)有限公司 | Threedimensional model determines method, device and equipment |
CN109334010A (en) * | 2018-10-10 | 2019-02-15 | 成都我有科技有限责任公司 | 3D printer implementation method, device and electronic equipment |
CN109334010B (en) * | 2018-10-10 | 2021-07-23 | 成都我有科技有限责任公司 | 3D printer implementation method and device and electronic equipment |
CN111105343A (en) * | 2018-10-26 | 2020-05-05 | Oppo广东移动通信有限公司 | Method and device for generating three-dimensional model of object |
CN111105343B (en) * | 2018-10-26 | 2023-06-09 | Oppo广东移动通信有限公司 | Method and device for generating three-dimensional model of object |
CN110378996A (en) * | 2019-06-03 | 2019-10-25 | 国网浙江省电力有限公司温州供电公司 | Server threedimensional model generation method and generating means |
WO2021007760A1 (en) * | 2019-07-15 | 2021-01-21 | Oppo广东移动通信有限公司 | Identity recognition method, terminal device and computer storage medium |
CN110992476A (en) * | 2019-12-18 | 2020-04-10 | 深圳度影医疗科技有限公司 | 3D printing method of fetus three-dimensional ultrasonic image, storage medium and ultrasonic equipment |
CN110992476B (en) * | 2019-12-18 | 2021-07-13 | 深圳度影医疗科技有限公司 | 3D printing method of fetus three-dimensional ultrasonic image, storage medium and ultrasonic equipment |
CN112233142A (en) * | 2020-09-29 | 2021-01-15 | 深圳宏芯宇电子股份有限公司 | Target tracking method, device and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107507269A (en) | Personalized three-dimensional model generating method, device and terminal device | |
US9317970B2 (en) | Coupled reconstruction of hair and skin | |
CN107481304A (en) | The method and its device of virtual image are built in scene of game | |
CN107610077A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107481317A (en) | The facial method of adjustment and its device of face 3D models | |
CN107707839A (en) | Image processing method and device | |
CN107592449A (en) | Three-dimension modeling method, apparatus and mobile terminal | |
CN107734267A (en) | Image processing method and device | |
CN107452034A (en) | Image processing method and its device | |
CN107509045A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107343148B (en) | Image completion method, apparatus and terminal | |
CN107707831A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107483845A (en) | Photographic method and its device | |
CN107480612A (en) | Recognition methods, device and the terminal device of figure action | |
CN107707835A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707838A (en) | Image processing method and device | |
CN107517346A (en) | Photographic method, device and mobile device based on structure light | |
CN107705278A (en) | The adding method and terminal device of dynamic effect | |
CN107481318A (en) | Replacement method, device and the terminal device of user's head portrait | |
CN107509043A (en) | Image processing method and device | |
CN107610078A (en) | Image processing method and device | |
CN107469355A (en) | Game image creation method and device, terminal device | |
CN107644440A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107705356A (en) | Image processing method and device | |
CN107613228A (en) | The adding method and terminal device of virtual dress ornament |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171222 |