CN109089105B - Model generation device and method based on depth perception coding - Google Patents


Info

Publication number
CN109089105B
Authority
CN
China
Prior art keywords
depth perception
target
initial image
module
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811008355.XA
Other languages
Chinese (zh)
Other versions
CN109089105A (en)
Inventor
吴跃华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yujing Information Technology Co.,Ltd.
Original Assignee
Angrui Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Angrui Shanghai Information Technology Co Ltd filed Critical Angrui Shanghai Information Technology Co Ltd
Priority to CN201811008355.XA priority Critical patent/CN109089105B/en
Publication of CN109089105A publication Critical patent/CN109089105A/en
Application granted granted Critical
Publication of CN109089105B publication Critical patent/CN109089105B/en

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a model generation device and method based on depth perception coding. The model generation device comprises a 3D camera and a processing end; the processing end comprises an acquisition module, a processing module, a generation module and a database, and the database comprises a plurality of depth perception codes. The 3D camera is used for acquiring an initial image of a shooting target; the acquisition module is used for acquiring a target depth perception code matched with the initial image from the database; the processing module is used for adjusting parameters of the target depth perception code according to the initial image; and the generation module is used for generating a 3D model of the shooting target according to the parameter-adjusted target depth perception code. The invention can digitize 3D images and generate a closed, complete 3D model from a single pair of 3D images, making the acquired 3D images easier to manage and control while reducing the resources consumed by computation and the space occupied by the 3D images.

Description

Model generation device and method based on depth perception coding
Technical Field
The invention relates to a model generation device and method based on depth perception coding.
Background
A 3D camera is built around 3D lenses: it generally has two or more image pickup lenses whose spacing is close to the spacing of human eyes, so it can capture the slightly different views of the same scene that the two eyes would see. A holographic 3D camera additionally has a disc above the lens and, through dot-grating imaging or grating holographic imaging, lets the same image be viewed from all directions, as if the viewer were in the scene.
To date, the 3D revolution has centered on Hollywood blockbusters and major sporting events. With the advent of 3D cameras, the technology is one step closer to home users: once such cameras are widespread, every memorable moment of life, such as a child's first step or a university graduation, can be captured with a 3D lens.
A 3D camera typically has more than two lenses. It functions somewhat like the human brain, fusing the two lens images into a single 3D image. These images can be played on a 3D television and viewed with active shutter glasses, or viewed directly on a naked-eye 3D display. The shutter glasses alternately open and close the left and right lenses 60 times per second, so each eye sees a slightly different picture of the same scene and the brain perceives them as a single 3D image.
The images acquired by existing 3D cameras are not easy to process and control, and modeling with a single 3D camera is inconvenient.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, in which images acquired by a 3D camera are not easy to process and control and are inconvenient to model, and provides a model generation device and method based on depth perception coding that can digitize 3D images, generate a closed and complete 3D model from a single pair of 3D images, make the acquired 3D images easier to manage and control, reduce the resources consumed by computation, and reduce the space occupied by the 3D images.
The invention solves the technical problems through the following technical scheme:
a model generation device based on depth perception coding is characterized in that the model generation device comprises a 3D camera and a processing end, the processing end comprises an acquisition module, a processing module and a generation module, the processing end further comprises a database, the database comprises a plurality of depth perception codes,
the 3D camera is used for acquiring an initial image of a shooting target;
the acquisition module is used for acquiring a target depth perception code matched with the initial image from the database;
the processing module is used for adjusting parameters of the target depth perception coding according to the initial image;
the generating module is used for generating the 3D model of the shooting target according to the target depth perception code with the adjusted parameters.
The depth perception code is a digital point cloud that can be edited, optimized and simplified. It can be a preset digital point cloud in which each digital point may carry a label, and certain conduction relations exist between the digital points.
The parameter may be a shape parameter: since a depth perception code can adjust its spatial shape, adjusting that shape to coincide with the initial image yields the depth perception code of the initial image.
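The patent does not fix a concrete data layout for a depth perception code. Purely as an illustration, the editable, labeled point cloud described above might be sketched as follows (all names are hypothetical):

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class DigitalPoint:
    position: np.ndarray   # 3D coordinates of the digital point
    label: str = ""        # optional semantic label, e.g. "nose_tip"


@dataclass
class DepthPerceptionCode:
    points: list[DigitalPoint] = field(default_factory=list)
    # Conduction relations between digital points, stored as index pairs.
    links: list[tuple[int, int]] = field(default_factory=list)

    def scale(self, factor: float) -> None:
        """One example of a shape parameter: uniformly scale the cloud."""
        for point in self.points:
            point.position = point.position * factor
```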
Preferably, the processing end further comprises an identification module,
the identification module is used for identifying the position of the initial image in the 3D model;
and the processing module is used for adjusting the parameters of the target depth perception coding and the region corresponding to the position according to the initial image.
Preferably, each depth perception code in the database comprises a plurality of observation regions, each observation region being the region of the depth perception code obtained by observing it from a virtual observation point in space,
the identification module is used for identifying a target observation area corresponding to the initial image;
the acquisition module is used for acquiring a target depth perception code closest to the initial image shape by searching a target observation region of each depth perception code;
and the processing module is used for adjusting the parameters of the target observation region of the target depth perception coding according to the initial image.
Preferably, the processing end further comprises a placing module,
the placement module is used for placing the depth perception code and the initial image in an overlapping mode to obtain the distance from an image point on the initial image to the depth perception code;
for one depth perception code, the obtaining module is configured to add up the distances of all the image points to obtain an overall matching value, and the depth perception code with the smallest overall matching value is the target depth perception code.
Preferably, the depth perception code includes a pixel layer and a structural layer, the depth perception code is provided with a plurality of control points for controlling the shape of the structural layer, and the processing module is configured to adjust the control points according to the shape of the initial image to adjust the parameters of the target depth perception code.
Preferably, the processing end comprises a placing module,
the placing module is used for placing the target depth perception code and the initial image in an overlapping mode to obtain the vertical distance from a control point on the target depth perception code to the initial image;
the processing module is further configured to take the control point with the largest distance as a target control point and move the target control point toward the initial image by that distance;
the processing module is further configured to move the peripheral control points around the target control point toward the initial image by an adjustment distance; the adjustment distance of each peripheral control point is inversely proportional to its distance from the target control point and is smaller than the movement distance of the target control point.
The invention also provides a model generation method based on digital images, which is characterized in that the model generation method is realized by a model generation device, the model generation device comprises a 3D camera and a processing end, the processing end comprises an acquisition module, a processing module and a generation module, the processing end further comprises a database, the database comprises a plurality of depth perception codes, and the model generation method comprises the following steps:
the 3D camera acquires an initial image of a shooting target;
the acquisition module acquires a target depth perception code matched with the initial image from the database;
the processing module adjusts parameters of the target depth perception coding according to the initial image;
and the generating module generates a 3D model of the shooting target according to the target depth perception code with the adjusted parameters.
Preferably, the processing end further includes an identification module, and the model generation method includes:
the identification module identifies a position of the initial image in the 3D model;
and the processing module adjusts the parameters of the target depth perception coding and the region corresponding to the position according to the initial image.
Preferably, each depth-aware coding in the database includes a plurality of observation regions, each observation region is a region obtained by observing the depth-aware coding through a virtual observation point in space, and the model generation method includes:
the identification module identifies a target observation area corresponding to the initial image;
the acquisition module acquires a target depth perception code which is closest to the initial image shape by searching a target observation area of each depth perception code;
and the processing module adjusts the parameters of the target observation region of the target depth perception coding according to the initial image.
Preferably, the processing end further includes a placement module, and the model generation method includes:
the placement module places the depth perception code and the initial image in an overlapping manner to obtain the distance from each image point on the initial image to the depth perception code;
for one depth perception code, the obtaining module adds up the distances of all its image points to obtain an overall matching value, and the depth perception code with the smallest overall matching value is the target depth perception code.
On the basis of common knowledge in the field, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the invention.
The positive progress effects of the invention are as follows:
the invention can digitize 3D images, generate a closed and complete 3D model through a pair of 3D images, and make the acquired 3D images easier to manage and control, and can reduce the resources consumed by operation and reduce the space occupied by the 3D images.
Drawings
Fig. 1 is a flowchart of a model generation method according to embodiment 1 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
The embodiment provides a model generation device based on depth perception coding, and the model generation device comprises a 3D camera and a processing end.
The processing end comprises an acquisition module, a processing module, a generation module and an identification module, and can be a computer terminal, a mobile phone terminal or a cloud server.
The processing end further comprises a database, and the database comprises a plurality of depth perception codes. A depth perception code is standardized data whose image is organized in units of pels (image points) that can be edited to change parameters.
The depth perception code is composed of vector units, which establish lines and the intersection points of those lines.
Vector graphics describe shapes by their geometric characteristics; a vector can be a point or a line. Vector graphics can only be generated by software, and their files occupy little space. An image file of this type contains independent, separable objects that can be freely recombined without limit. Its characteristic is that enlargement causes no distortion and is independent of resolution, so it is suitable for graphic design, lettering, logo design and layout design.
The 3D camera is used for acquiring an initial image of a shooting target;
the acquisition module is used for acquiring a target depth perception code matched with the initial image from the database;
the processing module is used for adjusting parameters of the target depth perception coding according to the initial image;
the generating module is used for generating the 3D model of the shooting target according to the target depth perception code with the adjusted parameters.
This embodiment can generate a closed space model from a single pair of 3D images, which speeds up model generation.
The identification module is used for identifying the position of the initial image in the 3D model;
and the processing module is used for adjusting the parameters of the target depth perception coding and the region corresponding to the position according to the initial image.
In order to quickly find the matched target depth perception code, this embodiment identifies from the initial image the shooting angle and which facial features and positions the image contains. For example, if the initial image is a frontal shot containing the eyes, nose and mouth, the acquisition module finds the matching target depth perception code in the database by searching only the frontal view of the 3D model, i.e. without computing the back of the head.
The matched depth perception code is the one closest in spatial shape to the initial image.
Further, each depth perception code in the database comprises a plurality of observation regions, each observation region being the region of the depth perception code obtained by observing it from a virtual observation point in space.
Through the observation regions, the depth perception code can be divided into a plurality of overlapping areas. Comparing the initial image against the observation regions yields the target observation region quickly, because the shooting angle can be recovered from the initial image, and that shooting angle corresponds to the region under one observation point.
The identification module is used for identifying a target observation area corresponding to the initial image;
the acquisition module is used for acquiring a target depth perception code closest to the initial image shape by searching a target observation region of each depth perception code;
and the processing module is used for adjusting the parameters of the target observation region of the target depth perception coding according to the initial image.
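The patent does not spell out how the target observation region is selected. A minimal sketch, assuming the shooting angle recovered from the initial image is available as a unit viewing direction and each observation region is keyed by the direction of its virtual observation point (all names hypothetical):

```python
import numpy as np


def select_target_region(regions: dict[str, np.ndarray],
                         shot_direction: np.ndarray) -> str:
    """Pick the observation region whose virtual observation point is most
    aligned with the estimated shooting direction (largest dot product)."""
    shot_direction = shot_direction / np.linalg.norm(shot_direction)
    return max(regions, key=lambda name: float(regions[name] @ shot_direction))


# Example: a frontal shot maps to the "front" observation region.
regions = {"front": np.array([0.0, 0.0, 1.0]),
           "left":  np.array([1.0, 0.0, 0.0]),
           "back":  np.array([0.0, 0.0, -1.0])}
assert select_target_region(regions, np.array([0.1, 0.0, 0.9])) == "front"
```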
In order to obtain the depth perception code with the closest shape, the processing end further comprises a placing module.
The placement module is used for placing the depth perception code and the initial image in an overlapping mode to obtain the distance from the image point on the initial image to the depth perception code.
In the present invention, the distance from a control point or an image point to the initial image refers to the distance between corresponding points, such as the distance from nose tip to nose tip or from mouth corner to mouth corner.
For one depth perception code, the obtaining module is configured to add distances of each image point of the depth perception code to obtain an overall matching value, and the depth perception code with the minimum overall matching value is the target depth perception code.
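In code, the overall matching value reduces to a sum of corresponding-point distances. A minimal sketch, assuming the corresponding points have already been paired as described above (function names are hypothetical):

```python
import numpy as np


def matching_value(image_points: np.ndarray, code_points: np.ndarray) -> float:
    """Both arrays have shape (n, 3), with rows already in correspondence
    (nose tip to nose tip, mouth corner to mouth corner, and so on)."""
    return float(np.linalg.norm(image_points - code_points, axis=1).sum())


def find_target_code(image_points: np.ndarray,
                     codes: list[np.ndarray]) -> np.ndarray:
    """The target depth perception code is the one whose overall matching
    value against the initial image is smallest."""
    return min(codes, key=lambda code: matching_value(image_points, code))
```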
Referring to fig. 1, with the above model generating apparatus, the present embodiment further provides a model generating method, including:
Step 100, the 3D camera acquires an initial image of a shooting target;
Step 101, the identification module identifies the position of the initial image in the 3D model.
In step 101, the identification module identifies a target observation region corresponding to the initial image to mark a position of the initial image in the 3D model.
Step 102, the obtaining module obtains the target depth perception code matched with the initial image from the database.
In step 102, the target depth perception code is acquired as follows:
the placement module places the depth perception code and the initial image in an overlapping manner to obtain the distance from each image point on the initial image to the depth perception code;
for one depth perception code, the obtaining module adds up the distances of all its image points to obtain an overall matching value, and the depth perception code with the smallest overall matching value is the target depth perception code.
Specifically, the obtaining module obtains the target depth perception code closest in shape to the initial image by searching the target observation region of each depth perception code; that is, it looks within the observation region for the depth perception code with the smallest matching value.
Step 103, the processing module adjusts the parameters of the region of the target depth perception code corresponding to the position according to the initial image.
Step 104, the generation module generates a 3D model of the shooting target according to the parameter-adjusted target depth perception code.
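Tying the steps together, the flow of fig. 1 might be glued as in the sketch below; every interface is hypothetical and merely stands in for the modules of this embodiment:

```python
def generate_model(camera, database, identify_region, find_target_code, adjust_region):
    initial_image = camera.capture()                    # step 100
    region = identify_region(initial_image)             # step 101
    code = find_target_code(initial_image, database)    # step 102
    adjust_region(code, region, initial_image)          # step 103
    return code.to_mesh()                               # step 104: closed 3D model
```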
Example 2
This embodiment is substantially the same as embodiment 1 except that:
the embodiment provides a specific parameter adjusting mode.
The depth perception code comprises a pixel layer and a structural layer, a plurality of control points used for controlling the shape of the structural layer are arranged on the depth perception code, and the processing module is used for adjusting the control points according to the shape of the initial image so as to adjust the parameters of the target depth perception code.
Specifically, the placement module is configured to place the target depth perception code in an overlapping manner with the initial image to obtain a vertical distance from a control point on the target depth perception code to the initial image;
the processing module is further configured to take the control point with the largest distance as a target control point and move the target control point toward the initial image by that distance;
the processing module is further configured to move the peripheral control points around the target control point toward the initial image by an adjustment distance; the adjustment distance of each peripheral control point is inversely proportional to its distance from the target control point and is smaller than the movement distance of the target control point.
The control points correspond to pels (image points): for example, a depth perception code may have 5,000 image points, of which 1,000 are key points, and these thousand key points correspond one-to-one with the control points. Moving a control point moves the corresponding image point in space, so the depth perception code closest to the initial image is brought even closer to it.
This procedure is executed cyclically until the distances from all control points to the initial image are smaller than a preset distance, at which point the process ends.
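A minimal NumPy sketch of this loop, under the assumption that the control points and their corresponding points on the initial image are given as paired arrays (all names hypothetical):

```python
import numpy as np


def adjust_control_points(control_points: np.ndarray,
                          image_points: np.ndarray,
                          preset: float = 1e-3,
                          max_iterations: int = 1000) -> np.ndarray:
    """control_points and image_points are (n, 3) arrays in one-to-one
    correspondence; returns the adjusted control points."""
    points = np.asarray(control_points, dtype=float).copy()
    for _ in range(max_iterations):
        offsets = image_points - points
        distances = np.linalg.norm(offsets, axis=1)
        if distances.max() < preset:
            break  # every control point is within the preset distance
        target = int(distances.argmax())
        move = distances[target]
        # Spacing of every control point from the target control point,
        # measured before the target point is moved.
        spacing = np.linalg.norm(points - points[target], axis=1)
        # Move the target control point by its full distance toward the image.
        points[target] += offsets[target]
        for i in range(len(points)):
            if i == target or distances[i] == 0.0:
                continue
            # Adjustment distance: inversely proportional to the spacing from
            # the target point, and strictly smaller than the target's move.
            step = min(move / (1.0 + spacing[i]), distances[i])
            points[i] += offsets[i] / distances[i] * step
    return points
```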
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (8)

1. A model generation device based on depth perception coding is characterized in that the model generation device comprises a 3D camera and a processing end, the processing end comprises an acquisition module, a processing module and a generation module, the processing end further comprises a database, the database comprises a plurality of depth perception codes,
the 3D camera is used for acquiring an initial image of a shooting target;
the acquisition module is used for acquiring a target depth perception code matched with the initial image from the database;
the processing module is used for adjusting parameters of the target depth perception coding according to the initial image;
the generating module is used for generating a 3D model of the shooting target according to the target depth perception code with the adjusted parameters;
the processing end also comprises an identification module,
the identification module is used for identifying the position of the initial image in the 3D model;
and the processing module is used for adjusting the parameters of the target depth perception coding and the region corresponding to the position according to the initial image.
2. The model generation apparatus of claim 1, wherein each depth perception code in the database comprises a plurality of observation regions, each observation region being the region of the depth perception code obtained by observing it from a virtual observation point in space,
the identification module is used for identifying a target observation area corresponding to the initial image;
the acquisition module is used for acquiring a target depth perception code closest to the initial image shape by searching a target observation region of each depth perception code;
and the processing module is used for adjusting the parameters of the target observation region of the target depth perception coding according to the initial image.
3. The model generation apparatus of claim 1, wherein the processing side further comprises a placement module,
the placement module is used for placing the depth perception code and the initial image in an overlapping mode to obtain the distance from an image point on the initial image to the depth perception code;
for one depth perception code, the obtaining module is configured to add distances of each image point of the depth perception code to obtain an overall matching value, and the depth perception code with the minimum overall matching value is the target depth perception code.
4. The model generation apparatus of claim 1, wherein the depth-aware coding comprises a pixel layer and a structural layer, the depth-aware coding is provided with a plurality of control points for controlling the shape of the structural layer, and the processing module is configured to adjust the control points according to the shape of the initial image to adjust the parameters of the target depth-aware coding.
5. The model generation apparatus of claim 4, wherein the processing side comprises a placement module,
the placing module is used for placing the target depth perception code and the initial image in an overlapping mode to obtain the vertical distance from a control point on the target depth perception code to the initial image;
the processing module is further configured to take the control point with the largest distance as a target control point and move the target control point toward the initial image by that distance;
the processing module is further configured to move the peripheral control points around the target control point toward the initial image by an adjustment distance; the adjustment distance of each peripheral control point is inversely proportional to its distance from the target control point and is smaller than the movement distance of the target control point.
6. A model generation method based on digital images is characterized in that the model generation method is realized through a model generation device, the model generation device comprises a 3D camera and a processing end, the processing end comprises an acquisition module, a processing module and a generation module, the processing end further comprises a database, the database comprises a plurality of depth perception codes, and the model generation method comprises the following steps:
the 3D camera acquires an initial image of a shooting target;
the acquisition module acquires a target depth perception code matched with the initial image from the database;
the processing module adjusts parameters of the target depth perception coding according to the initial image;
the generation module generates a 3D model of the shooting target according to the target depth perception code with the adjusted parameters;
the processing end further comprises an identification module, and the model generation method comprises the following steps:
the identification module identifies a position of the initial image in the 3D model;
and the processing module adjusts the parameters of the target depth perception coding and the region corresponding to the position according to the initial image.
7. The model generation method of claim 6, wherein each depth-aware coding in the database comprises a plurality of observation regions, each observation region being a region obtained by observing the depth-aware coding through a virtual observation point in space, the model generation method comprising:
the identification module identifies a target observation area corresponding to the initial image;
the acquisition module acquires a target depth perception code which is closest to the initial image shape by searching a target observation area of each depth perception code;
and the processing module adjusts the parameters of the target observation region of the target depth perception coding according to the initial image.
8. The model generation method of claim 6, wherein the processing side further comprises a placement module, the model generation method comprising:
the placing module is used for placing the depth perception code and the initial image in an overlapping mode to obtain the distance from an image point on the initial image to the depth perception code;
for one depth perception code, the obtaining module adds the distances of each image point of the depth perception code to obtain an overall matching value, and the depth perception code with the minimum overall matching value is the target depth perception code.
CN201811008355.XA 2018-08-31 2018-08-31 Model generation device and method based on depth perception coding Active CN109089105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811008355.XA CN109089105B (en) 2018-08-31 2018-08-31 Model generation device and method based on depth perception coding


Publications (2)

Publication Number Publication Date
CN109089105A CN109089105A (en) 2018-12-25
CN109089105B (en) 2020-06-23

Family

ID=64840426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811008355.XA Active CN109089105B (en) 2018-08-31 2018-08-31 Model generation device and method based on depth perception coding

Country Status (1)

Country Link
CN (1) CN109089105B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI553565B (en) * 2014-09-22 2016-10-11 銘傳大學 Utilizing two-dimensional image to estimate its three-dimensional face angle method, and its database establishment of face replacement and face image replacement method
US10528801B2 (en) * 2016-12-07 2020-01-07 Keyterra LLC Method and system for incorporating contextual and emotional visualization into electronic communications

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108305312A (en) * 2017-01-23 2018-07-20 腾讯科技(深圳)有限公司 The generation method and device of 3D virtual images
CN107123160A (en) * 2017-05-02 2017-09-01 成都通甲优博科技有限责任公司 Simulation lift face system, method and mobile terminal based on three-dimensional image
CN108389253A (en) * 2018-02-07 2018-08-10 盎锐(上海)信息科技有限公司 Mobile terminal with modeling function and model generating method

Also Published As

Publication number Publication date
CN109089105A (en) 2018-12-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230406

Address after: 518000 1101-g1, BIC science and technology building, No. 9, scientific research road, Maling community, Yuehai street, Nanshan District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Yujing Information Technology Co.,Ltd.

Address before: 201703 No.206, building 1, no.3938 Huqingping Road, Qingpu District, Shanghai

Patentee before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.
