CN202634612U - Cyber-physical image fusion device - Google Patents


Info

Publication number
CN202634612U
Authority
CN
China
Prior art keywords
camera
image
display
moving block
rectangle support
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201220289294
Other languages
Chinese (zh)
Inventor
何卫平
张衡
林清松
***
王伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN 201220289294 priority Critical patent/CN202634612U/en
Application granted granted Critical
Publication of CN202634612U publication Critical patent/CN202634612U/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The utility model provides a cyber-physical image fusion device comprising a glasses-type three-dimensional (3D) display, binocular cameras, an image processing device, and a feature cube. The glasses-type 3D display comprises a left-eye display and a right-eye display. The binocular cameras are two miniature CCD (charge-coupled device) cameras mounted in camera fixing devices, each comprising a rectangular bracket and a spherical rotating block; the fixing devices are mounted on a camera locating board that clips onto the glasses-type 3D display, and a trapezoidal chute on the board's outer face mates with a trapezoidal slider on each bracket. The two image channels acquired synchronously by the binocular cameras are converted into digital image signals by the image processing device, undergo cyber-physical image fusion, and are output to the glasses-type 3D display, which plays the stereo images in left-right mode. The device realizes 3D simulation of the assembly environment through stereoscopic display, giving operators an immersive, on-the-scene experience.

Description

Cyber-physical image fusion device
Technical field
The utility model relates to the field of 3D display and image processing for cyber-physical systems (CPS), and specifically to a cyber-physical image fusion device that fuses a three-dimensional model with multi-angle views of a physical object for display.
Background technology
A cyber-physical system (CPS) is a novel embedded system that deeply merges the information world with the physical world through the cooperation of computing, communication, and control technologies. Based on feature-point information gathered from the physical environment, a CPS can adjust and control each physical entity efficiently in real time, which has advanced the development of virtual assembly technology.
Building a virtual assembly environment consistent with the actual production environment, and interactively performing operations such as product assembly and disassembly through cyber-physical images, is an essential part of virtual assembly. Current virtual assembly environments are mainly modeled with 3D CAD software or the Virtual Reality Modeling Language (VRML); interaction mainly relies on data gloves, position trackers, menus, and dialog boxes. Data gloves and position trackers capture finger motion with built-in sensors, recognize gestures, and map them to commands and operations to simulate the assembly process; menus and dialog boxes use keyboard and mouse input to control the assembly scene and complete the virtual product assembly. For example, "Construction of a virtual assembly platform for assembly fixtures in a network environment" by Wei Yuanyuan et al. (Modular Machine Tool & Automatic Manufacturing Technique, No. 8, 2011) introduces a virtual assembly environment built on VRML, using menu-and-dialog interaction and controlling the scene through Java.
According to current research, mainstream modeling methods and interaction modes fall short of a true three-dimensional simulation of the assembly environment in the following respects:
1) The assembly process of a part is usually affected by assembly tools, fixtures, and other assembly-environment information, yet current environment-construction techniques model only the parts themselves and ignore the influence of tools, fixtures, and the rest of the assembly environment;
2) Current virtual-assembly interaction relies entirely on sensors and menu/dialog input to simulate assembly, and does not fuse the information world with the physical world. Assembly simulation in a two-dimensional environment also neglects the human-machine interaction process, making a truly three-dimensional assembly process difficult to realize.
Summary of the invention
Technical problem to be solved
To overcome the shortcomings of current virtual assembly technology, the utility model proposes a cyber-physical image fusion device. It realizes a stereoscopic simulation of the assembly environment through stereo display, giving the operator an immersive experience; it perceives the position of physical assembly units in the scene through binocular vision, builds virtual assembly-unit models, fuses the virtual and physical assembly units, and thereby unites the information world with the physical world.
Technical scheme
The technical scheme of the utility model is:
The cyber-physical image fusion device is characterized by comprising a glasses-type stereoscopic display, a binocular camera pair, an image processing device, and a feature cube.
The glasses-type stereoscopic display comprises a left-eye display and a right-eye display.
The binocular camera pair consists of two miniature CCD cameras, each with a resolution of at least 640x480, used to acquire images in imitation of the human eyes.
Each camera is installed in a camera fixing device comprising a rectangular bracket and a spherical rotating block. One end face of the bracket carries a trapezoidal slider; the other end carries a spherical recess in which the rotating block is seated. The rotating block is connected to the bracket by a shaft that passes through the block's center, runs parallel to the bracket end face, and is perpendicular to the sliding direction of the trapezoidal slider. The outward-facing side of the rotating block carries a camera mounting hole whose central axis perpendicularly intersects the block's central axis; the camera is fixed in the hole with its central axis parallel to the hole's axis.
The camera fixing devices are mounted on a camera locating board whose edge carries clips for attaching the board to the glasses-type stereoscopic display; the outer face of the board carries a trapezoidal chute that mates with the trapezoidal sliders.
The feature cube serves image registration. Its outer surface is a single solid color, except that the three edges meeting at one vertex are colored red, green, and blue respectively, distinct from the body color; these three edges are the cube's characteristic edges.
The two image channels acquired synchronously by the binocular cameras are converted into digital image signals by the image processing device, undergo cyber-physical image fusion, and are output to the stereoscopic display, which plays the stereo images in left-right mode.
Beneficial effect
The integrated design of the utility model achieves:
1) The binocular cameras are fixed to the glasses-type stereoscopic display through the camera locating board and the camera fixing devices, forming a cyber-physical image fusion system that integrates image acquisition, processing, and display in one unit, making the device more ergonomic and convenient to use;
2) The two image channels captured by the video capture card are sent directly to a buffer for image registration and fusion, and the registered images are streamed in real time into the video card's memory for the video glasses to display; the whole process is fast and efficient, demonstrating the efficiency and real-time character of a cyber-physical system.
In summary, the utility model can quickly locate the feature object in the physical world, and the video capture card drives the two cameras to synchronously acquire an image pair with parallax. During dual-channel acquisition, the images are fused according to the registration relation between the physical feature cube and the model feature cube. The fused left and right images are aligned and stitched side by side, output to the video glasses, and displayed stereoscopically through the glasses' perspective transformation. Experimental verification shows that, by acquiring image pairs with suitable parallax and building an accurate three-dimensional model, the system achieves rapid registration of physical images with virtual models and real-time stereoscopic display, finally uniting the physical world with the virtual world.
Description of drawings
Fig. 1: perspective view of the device of the utility model;
Fig. 2: structural diagram of the camera fixing device;
Fig. 3: structural diagram of the device of the utility model;
Fig. 4: structural diagram of the camera locating device;
Fig. 5: schematic of the feature cube;
Wherein: 1, binocular camera; 3, camera fixing device; 6, glasses-type stereoscopic display; 7, camera locating board; 8, trapezoidal chute; 9, rectangular bracket; 10, spherical rotating block; 11, feature cube.
Embodiment
The utility model is described in detail below with reference to an embodiment.
Referring to Fig. 1 and Fig. 3, the cyber-physical image fusion device comprises a glasses-type stereoscopic display 6, binocular cameras 1, an image processing device, and a feature cube 11. The glasses-type stereoscopic display comprises a left-eye display and a right-eye display; in this embodiment it is a pair of Wrap 9000 video glasses, which supports left-right-mode playback of stereo image pairs and connects to the video card through a VGA interface.
The binocular cameras are two miniature CCD cameras, each with a resolution of at least 640x480, acquiring images in imitation of the human eyes. In this embodiment each camera is an ultra-miniature CCD camera with a 3.6 mm lens supplied by Kamolai Electronic Science and Technology Co., with a resolution of 640x480, a 73-degree field of view, and dimensions of 16 mm x 16 mm x 12 mm.
Each camera is installed in a camera fixing device 3. Referring to Fig. 4, the camera fixing device 3 comprises a rectangular bracket 9 and a spherical rotating block 10. One end face of the bracket carries a trapezoidal slider; the other end carries a spherical recess in which the rotating block is seated. The rotating block is connected to the bracket by a shaft that passes through the block's center, runs parallel to the bracket end face, and is perpendicular to the sliding direction of the trapezoidal slider. In this embodiment the bracket 9 measures 35 mm x 35 mm x 20 mm, the recess radius is 15 mm, and the rotating block 10 radius is 13 mm. The outward-facing side of the rotating block carries a camera mounting hole whose central axis perpendicularly intersects the block's central axis; the camera is fixed in the hole with its central axis parallel to the hole's axis.
The camera fixing devices are mounted on a camera locating board 7. Referring to Fig. 2, the edge of the board carries clips for attaching it to the glasses-type stereoscopic display, and the outer face of the board carries a trapezoidal chute 8 that mates with the trapezoidal sliders.
The feature cube serves image registration; it measures 55 mm x 55 mm x 55 mm, its outer surface is a single solid color, and the three edges meeting at one vertex are colored red, green, and blue, distinct from the body color; these three edges are the cube's characteristic edges.
The two image channels acquired synchronously by the binocular cameras are converted into digital image signals by the image processing device, which comprises an image capture card and a dual-output video card. The digital image signals undergo cyber-physical image fusion and are output to the stereoscopic display for left-right-mode stereo playback. The capture card is a Weishi MV-8002 dual-channel image capture card; it connects to the cameras through video signal interfaces and converts their video signals into digital image signals. Secondary development of the capture card enables synchronous dual-channel acquisition at 20 frames/s and provides an interface for processing the captured images. The dual-output video card uses an NVIDIA GT210M chip, with two VGA interfaces connecting the computer monitor and the glasses-type stereoscopic display.
The cyber-physical image fusion method adopted in this embodiment comprises the following steps:
Step 1: build the feature-cube three-dimensional model in the model space using the OpenGL graphics API, and set up the viewing frustum required for perspective projection:
Step 1.1: set the model coordinate system to coincide with the world coordinate system, and build a feature-cube model with the same outer dimensions as the physical feature cube; set the color attributes of the three edges V0Vx, V0Vy, V0Vz meeting at one vertex of the model to match the characteristic-edge colors of the physical cube;
Step 1.2: according to the physical parameters of the binocular cameras, set the frustum parameters for the perspective projection in the model space: the near clipping plane lies at the camera focal length, 3.6 mm, from the viewpoint and serves as the projection plane; its size equals that of the camera's photosensitive element, 9.7 mm x 7.9 mm; the far clipping plane lies at 3.6 + 500 mm from the viewpoint; and the display window of objects inside the frustum is 640 x 480, matching the size of the captured images;
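The frustum of step 1.2 maps directly onto an OpenGL-style glFrustum projection. A minimal numpy sketch of the equivalent matrix, using the dimensions stated above (near plane at the 3.6 mm focal length, sized to the 9.7 mm x 7.9 mm sensor, far plane at 503.6 mm); the symmetric left/right and bottom/top split is our assumption, and the function name is ours:

```python
import numpy as np

def frustum_matrix(left, right, bottom, top, near, far):
    """OpenGL glFrustum-style perspective matrix, written as a plain
    row-major numpy array."""
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Step 1.2 parameters: near plane = focal length 3.6 mm, sensor
# 9.7 mm x 7.9 mm centered on the axis, far plane = 3.6 + 500 mm.
w, h, near, far = 9.7, 7.9, 3.6, 503.6
M = frustum_matrix(-w/2, w/2, -h/2, h/2, near, far)
```

In an OpenGL program the same parameters would be passed to glFrustum when setting up the projection for each eye.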
Step 2: adjust the binocular cameras so that their separation equals the wearer's interpupillary distance, 63 mm; place the physical feature cube within the cameras' field of view so that both cameras can see its characteristic edges;
Step 3: the two cameras synchronously and continuously acquire left and right images of the real space, each at 640 x 480; extract the characteristic edges of the feature cube from the left and right images by the following steps:
Step 3.1: color-filter the real-space image into grayscale masks: for each pixel, if its red component exceeds the sum of its green and blue components, set the pixel to 255, otherwise to 0, yielding the red-filtered grayscale image; likewise, compare the blue component against the sum of the green and red components to obtain the blue-filtered grayscale image, and the green component against the sum of the red and blue components to obtain the green-filtered grayscale image;
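The channel comparison of step 3.1 is a simple vectorized operation. A minimal numpy sketch (the function name is ours; the patent specifies no implementation):

```python
import numpy as np

def color_filter_masks(img):
    """Split an RGB image (H x W x 3, uint8) into three binary masks,
    one per characteristic-edge color, following step 3.1: a pixel is
    kept (255) when its target component exceeds the SUM of the other
    two components, otherwise set to 0."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    red_mask = np.where(r > g + b, 255, 0).astype(np.uint8)
    green_mask = np.where(g > r + b, 255, 0).astype(np.uint8)
    blue_mask = np.where(b > r + g, 255, 0).astype(np.uint8)
    return red_mask, green_mask, blue_mask
```

The sum criterion makes the masks conservative: a neutral gray pixel (all components similar) never passes, so only the strongly colored cube edges survive.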
Step 3.2: extract the edges of the three filtered grayscale images of the real-space image using morphological erosion and dilation;
Step 3.3: detect the straight-line segments in the three filtered grayscale images with reference to the published line-detection algorithm based on the Freeman criterion; keep only segments no shorter than 32 pixels, sample 6 evenly spaced pixels on each segment to fit a line equation, and obtain the line sets of the three filtered images: the red-filtered set R = {L_r^i : y = k_R^i x + b_i}, the green-filtered set G = {L_g^j : y = k_G^j x + b_j}, and the blue-filtered set B = {L_b^k : y = k_B^k x + b_k};
Step 3.4: compute the intersection q_i of each line in R with each line in G, and record each q_i together with its generating lines from R and G as the set M; compute the intersection s_i of each line in R with each line in B, and record each s_i together with its generating lines from R and B as the set N;
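The pairwise intersection search of step 3.4 reduces to intersecting lines given in slope-intercept form. A minimal sketch under the patent's y = kx + b representation (function names are ours):

```python
def intersect(k1, b1, k2, b2):
    """Intersection of y = k1*x + b1 and y = k2*x + b2.
    Returns None for (near-)parallel lines, which yield no usable
    intersection for the vertex search."""
    if abs(k1 - k2) < 1e-9:
        return None
    x = (b2 - b1) / (k1 - k2)
    return (x, k1 * x + b1)

def cross_intersections(lines_a, lines_b):
    """All pairwise intersections between two sets of (k, b) lines,
    kept together with the generating lines, as in the sets M and N."""
    out = []
    for la in lines_a:
        for lb in lines_b:
            p = intersect(*la, *lb)
            if p is not None:
                out.append((la, lb, p))
    return out
```

Applied once to the red/green line sets and once to the red/blue sets, this yields the candidate sets M and N of step 3.4.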
Step 3.5: for every pair of entries in M and N that share the same line L_r^i from R, compute the distance L_i between the corresponding intersection points q_i and s_i;
Step 3.6: repeat step 3.5 over all entries of M and N; the three lines of R, G and B corresponding to the minimum distance min(L_i) are the red, green, and blue characteristic edges of the feature cube, with slopes k_R^imin, k_G^jmin, k_B^kmin respectively; the midpoint of the segment joining the corresponding intersections q_imin and s_imin is the common vertex V''_0 of the three characteristic edges;
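Steps 3.5-3.6 then pick, among all (q_i, s_i) pairs sharing a red line, the pair at minimum distance, and take its midpoint as the cube vertex. A sketch (the data layout and function name are ours):

```python
import math

def find_vertex(M, N):
    """Among red/green intersections M and red/blue intersections N
    that share the same red line, pick the pair with minimum distance
    (steps 3.5-3.6); the midpoint approximates the cube vertex.
    Each entry is (red_line, other_line, point)."""
    best = None
    for red_rg, green, q in M:
        for red_rb, blue, s in N:
            if red_rg != red_rb:  # both intersections must lie on the same red edge line
                continue
            d = math.dist(q, s)
            if best is None or d < best[0]:
                mid = ((q[0] + s[0]) / 2, (q[1] + s[1]) / 2)
                best = (d, red_rg, green, blue, mid)
    return best
```

The three lines in the winning entry are the detected characteristic edges, and the midpoint is the image-space vertex used in step 4.5.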
Step 4: from the correspondence between the characteristic edges extracted from the physical images in step 3 and the projected images of the characteristic edges of the feature-cube model built in step 1, determine the coordinates and viewing-direction vectors of the left and right viewpoints in the model space, as follows:
Step 4.1: let the viewpoint position be V_l(x_l, y_l, z_l) and the viewing-direction vector be (u_l, v_l, 1). The projection of the viewpoint onto the projection plane is V'_l((x_l + 3.6 u_l), (y_l + 3.6 v_l), (z_l + 3.6)), and the projection-plane equation is:
u_l[x - (x_l + 3.6 u_l)] + v_l[y - (y_l + 3.6 v_l)] + [z - (z_l + 3.6)] = 0    (4-1)
Step 4.2: for any point P(x_p, y_p, z_p) in the model space and the viewpoint V_l(x_l, y_l, z_l), the line through them is
vec(OP') = vec(OP) + t vec(V_l P)    (4-2)
where O is the origin of the model coordinate system. Solving eqs. (4-1) and (4-2) simultaneously yields the parameter t and the intersection P'(x'_p, y'_p, z'_p) of the line with the projection plane; the coordinates of P' are functions of the viewpoint coordinates and the viewing-direction vector in the model space;
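Eqs. (4-1) and (4-2) amount to a standard line-plane intersection that can be solved in closed form for the parameter t. A numpy sketch, with the plane given by its normal n = (u_l, v_l, 1) and a point on it (names are ours):

```python
import numpy as np

def project_to_plane(P, V, n, plane_pt):
    """Intersect the line through scene point P and viewpoint V with
    the projection plane through plane_pt with normal n; returns the
    projected point P' of eqs. (4-1)/(4-2)."""
    P, V, n, plane_pt = (np.asarray(a, dtype=float) for a in (P, V, n, plane_pt))
    d = V - P                           # direction from P toward the viewpoint
    t = n.dot(plane_pt - P) / n.dot(d)  # solve n . (P + t*d - plane_pt) = 0
    return P + t * d
```

For the patent's setup, n would be (u_l, v_l, 1) and plane_pt the foot point V'_l of step 4.1.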
Step 4.3: establish an image coordinate system on the projection plane, with origin at the lower-left corner O'(x_o, y_o, z_o) of the plane, the u axis along the plane's horizontal direction, the v axis along its vertical direction, and the n axis perpendicular to the plane and pointing toward the viewpoint. The homogeneous transformation between the model and image coordinate systems is:
P''(x''_p, y''_p, z''_p, 1) = Q · T · P'(x'_p, y'_p, z'_p, 1)    (4-3)
where P''(x''_p, y''_p, z''_p) is P'(x'_p, y'_p, z'_p) expressed in image coordinates. The translation matrix T is
| 1 0 0 x_o |
| 0 1 0 y_o |
| 0 0 1 z_o |
| 0 0 0 1   |
and the rotation matrix Q is
| u_x u_y u_z 0 |
| v_x v_y v_z 0 |
| n_x n_y n_z 0 |
| 0   0   0   1 |
whose elements are the components of the unit vectors of the image coordinate axes. Since the projection plane is always perpendicular to the n axis, P''(x''_p, y''_p, z''_p) can be written as the plane coordinates P''(x''_p, y''_p);
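The homogeneous change of frame in eq. (4-3) can be sketched with numpy. One sign assumption to flag: to move the image-frame origin O' to zero this sketch translates by -O', whereas the matrix as printed carries +O'; the printed form may follow a different sign convention.

```python
import numpy as np

def to_image_coords(P_prime, O_prime, u, v, n):
    """Eq. (4-3): P'' = Q . T . P'. The rows of Q are the image axes
    (u, v, n) expressed in model coordinates; T translates the
    image-frame origin O' to zero (sign assumption, see above)."""
    T = np.eye(4)
    T[:3, 3] = -np.asarray(O_prime, dtype=float)
    Q = np.eye(4)
    Q[0, :3], Q[1, :3], Q[2, :3] = u, v, n
    P_h = np.append(np.asarray(P_prime, dtype=float), 1.0)
    return (Q @ T @ P_h)[:3]
```

Because the projection plane is perpendicular to the n axis, the third component of the result is constant and only (x''_p, y''_p) is carried forward.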
Step 4.4: using eqs. (4-1) and (4-2), project the characteristic-edge endpoints V_0, V_x, V_y, V_z of the feature cube in the model space onto the projection plane as V'_0, V'_x, V'_y, V'_z; using eq. (4-3), obtain their image coordinates V''_0(x''_0, y''_0), V''_x(x''_x, y''_x), V''_y(x''_y, y''_y), V''_z(x''_z, y''_z). Compute the slopes k'_r, k'_g, k'_b of the lines V''_0V''_x, V''_0V''_y, V''_0V''_z; these slopes are functions of the viewpoint coordinates and the viewing-direction vector in the model space;
Step 4.5: from the system
k'_r = k_R^imin
k'_g = k_G^jmin
k'_b = k_B^kmin
x''_0 / 9.7 = x_{V''0} / 640
y''_0 / 7.9 = y_{V''0} / 480
solve for the viewpoint coordinates V_l(x_l, y_l, z_l) and the viewing-direction vector (u_l, v_l, 1) in the model space;
Step 5: build, in the model space, the virtual object model that is to be fused into the physical images; determine the transformation between its model coordinate system and that of the feature-cube model, and express the transformation as a translation matrix T' and a rotation matrix Q';
Step 6: using the left and right viewpoint coordinates and viewing directions obtained in step 4, apply perspective projection to the model space in OpenGL to obtain the left and right images of the model space;
Step 7: overlay the left and right model-space images from step 6 on the left and right physical images captured by the cameras, letting the virtual object model in the model-space images cover its physical counterpart; display the fused left and right images on the left-eye and right-eye displays of the glasses-type stereoscopic display, respectively.
Further, by adjusting the translation matrix T' and the rotation matrix Q', the fused virtual object model can be moved within the fused image.
Further, during real-time display, check whether the feature-cube characteristic edges detected in either camera's current image coincide with those in the previous frame. If they coincide, the viewpoint is unchanged and the left and right model-space images are kept; if not, the viewpoint has changed, and the viewpoint coordinates and viewing-direction vector in the model space are recomputed according to steps 3 and 4.
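The per-frame logic of the last paragraph is a simple cache-or-recompute decision. A sketch (the data layout, tolerance, and names are ours; the patent does not state how "coincide" is tested):

```python
def update_viewpoint(prev_edges, edges, prev_view, recompute, tol=2.0):
    """If the detected characteristic edges match the previous frame's
    (within a pixel tolerance), keep the cached viewpoint; otherwise
    recompute it, as in the real-time display loop.
    `edges` is a triple of (slope, intercept) lines; `recompute` maps
    edges -> viewpoint; `tol` is an assumed tolerance."""
    def close(e1, e2):
        return all(abs(a - b) <= tol
                   for l1, l2 in zip(e1, e2)
                   for a, b in zip(l1, l2))
    if prev_edges is not None and close(prev_edges, edges):
        return prev_view, prev_edges     # viewpoint unchanged, reuse images
    return recompute(edges), edges       # viewpoint moved, redo steps 3-4
```

Skipping the recomputation when the cube has not moved keeps the fusion loop within the 20 frames/s capture rate.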

Claims (1)

1. A cyber-physical image fusion device, characterized by comprising a glasses-type stereoscopic display, a binocular camera pair, an image processing device, and a feature cube;
said glasses-type stereoscopic display comprises a left-eye display and a right-eye display;
said binocular camera pair consists of two miniature CCD cameras, each with a resolution of at least 640x480, used to acquire images in imitation of the human eyes;
said cameras are each installed in a camera fixing device comprising a rectangular bracket and a spherical rotating block; one end face of the bracket carries a trapezoidal slider, and the other end carries a spherical recess in which the rotating block is seated; the rotating block is connected to the bracket by a shaft that passes through the block's center, runs parallel to the bracket end face, and is perpendicular to the sliding direction of the trapezoidal slider; the outward-facing side of the rotating block carries a camera mounting hole whose central axis perpendicularly intersects the block's central axis; the camera is fixed in the hole with its central axis parallel to the hole's axis;
said camera fixing devices are mounted on a camera locating board whose edge carries clips for attaching the board to the glasses-type stereoscopic display; the outer face of the board carries a trapezoidal chute that mates with the trapezoidal sliders;
said feature cube serves image registration; its outer surface is a single solid color, except that the three edges meeting at one vertex are colored red, green, and blue respectively, distinct from the body color; these three edges are the cube's characteristic edges;
the two image channels acquired synchronously by said binocular cameras are converted into digital image signals by the image processing device, undergo cyber-physical image fusion, and are output to the stereoscopic display, which plays the stereo images in left-right mode.
CN 201220289294 2012-06-19 2012-06-19 Cyber-physical image fusion device Expired - Fee Related CN202634612U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201220289294 CN202634612U (en) 2012-06-19 2012-06-19 Cyber-physical image fusion device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201220289294 CN202634612U (en) 2012-06-19 2012-06-19 Cyber-physical image fusion device

Publications (1)

Publication Number Publication Date
CN202634612U true CN202634612U (en) 2012-12-26

Family

ID=47387749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201220289294 Expired - Fee Related CN202634612U (en) 2012-06-19 2012-06-19 Cyber-physical image fusion device

Country Status (1)

Country Link
CN (1) CN202634612U (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801994A (en) * 2012-06-19 2012-11-28 西北工业大学 Physical image information fusion device and method
CN102801994B (en) * 2012-06-19 2014-08-20 西北工业大学 Physical image information fusion device and method
CN105487261A (en) * 2016-01-26 2016-04-13 京东方科技集团股份有限公司 Multi-angle shooting equipment for eyeglasses and eyeglasses containing multi-angle shooting equipment for eyeglasses
US9948845B2 (en) 2016-01-26 2018-04-17 Boe Technology Group Co., Ltd. Device for multi-angle photographing in eyeglasses and eyeglasses including the device


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121226

Termination date: 20140619

EXPY Termination of patent right or utility model