CN103996181A - Large-vision-field image splicing system and method based on two bionic eyes - Google Patents

Large-vision-field image splicing system and method based on two bionic eyes

Info

Publication number
CN103996181A
CN103996181A (application CN201410196649.5A; granted as CN103996181B)
Authority
CN
China
Prior art keywords
image
node
devkit
carma
main control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410196649.5A
Other languages
Chinese (zh)
Other versions
CN103996181B (en)
Inventor
罗均
刘恒利
李恒宇
赵重阳
谢少荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinghai Intelligent Equipment Co ltd
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201410196649.5A
Publication of CN103996181A
Application granted
Publication of CN103996181B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Toys (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a large-vision-field image splicing system and method based on two bionic eyes. In the method, each high-definition camera is connected to an on-board fast processor, the SECO CARMA DevKit; the CARMA DevKit boards and a high-performance computer form a distributed computing network through the Zeroconf technology and the Robot Operating System (ROS). After a high-definition camera captures a high-definition image, the image is transmitted through a USB interface to the on-board CARMA DevKit, which extracts feature points from it; the high-performance computer then matches and stitches the images. In this way the high-performance computer and the image fast processors CARMA DevKit form a distributed computing network with separate nodes: feature point extraction runs on the embedded on-board CARMA DevKit, and image stitching is completed on the host. The system and method are mainly used for large-field-of-view image stitching, in particular for wide-field environment sensing by mobile robots.

Description

Large-field-of-view image stitching system and method based on bionic binocular eyes
Technical field
The invention discloses a large-field-of-view image stitching system and method based on bionic binocular eyes, relating to the fields of robot vision, feature point extraction and CUDA parallel computing.
Background technology
Robot vision is the most efficient means of perceiving the surrounding environment and plays a vital role in mobile robot navigation. Real-time, wide-field image information is of great significance for the navigation of ground mobile robots.
Image stitching refers to synthesizing several small images of the same scene with overlapping regions into one large image with a wide viewing angle. Image stitching breaks through the limitation of the lens shooting angle and expands the field of view; it is widely used in remote sensing image processing, medical image analysis, cartography, computer vision, video surveillance, virtual reality, super-resolution reconstruction and robot navigation. Because the amount of data involved in the image stitching process is large and the computation is intensive, many serial processing methods currently have difficulty meeting real-time requirements in practical applications.
At present, running complex image algorithms directly on an on-board platform requires the platform to carry dedicated FPGA or DSP hardware on which the algorithm is fixed and specially optimized, which reduces economy. Alternatively, in image servoing on a mobile robot's on-board platform, the image information is delivered over the network to an upstream server for processing, which places high demands on that server.
Most image processing on on-board platforms is done at a resolution of 640 × 480 and cannot handle high-definition images in real time. Because the amount of computation and data is large, the industrial control computer carried on a mobile robot's on-board platform cannot directly compute the image stitching. The image stitching process is mainly divided into three steps: image acquisition, image matching and image fusion, of which the most crucial technology is image matching. The key of image matching is to accurately determine the position of the overlapping part in the two images and thereby obtain the transformation relation between them. Image stitching can use a feature-point-based image matching algorithm; the feature point extraction in this process is computationally complex and can be accelerated by GPU parallelization.
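The feature-based matching discussed above can be sketched as a brute-force nearest-neighbour search over descriptors followed by Lowe's ratio test, a standard companion to SURF/SIFT matching. The descriptor dimension and ratio threshold below are illustrative assumptions, not values taken from the patent; the O(Na·Nb·D) distance computation is exactly the kind of workload the patent offloads to GPU parallelization:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Brute-force matching with Lowe's ratio test.

    desc_a: (Na, D) descriptors from the left image
    desc_b: (Nb, D) descriptors from the right image (Nb >= 2)
    Returns a list of (index_a, index_b) pairs that pass the test.
    """
    # Pairwise squared Euclidean distances: this O(Na*Nb*D) step is what
    # makes feature matching expensive and GPU-friendly.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i, row in enumerate(d2):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if row[best] < (ratio ** 2) * row[second]:
            matches.append((i, int(best)))
    return matches
```

In practice OpenCV's `BFMatcher.knnMatch` performs the same search; this sketch only shows the structure of the computation.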
In March 2010, Willow Garage (http://www.willowgarage.com/) released the Robot Operating System (ROS). ROS provides various libraries and tools to help application developers build robot applications, including a hardware abstraction layer, hardware drivers, visualization tools, message passing and package management. Meanwhile, the embedded-hardware solutions supplier SECO, jointly with NVIDIA, released CARMA DevKit, an embedded GPGPU platform product oriented toward researchers; it uses the parallel computing capability of the graphics card to increase computing speed and builds up a portable high-performance computing platform.
Summary of the invention
In order to overcome the above deficiencies of the prior art, the invention provides a large-field-of-view image stitching system and method based on bionic binocular eyes, solving the problem that existing high-definition real-time image stitching is either slow or requires dedicated hardware.
To achieve the above object, the design of the invention is as follows: first, high-definition images are collected by the image input system; then the images are sent to the SECO CARMA DevKit embedded CUDA hardware/software platform, which performs real-time feature point detection on the images collected in real time; finally, the computation results are returned to the host computer, which performs the final image stitching.
The large-field-of-view image stitching system based on bionic binocular eyes of the invention comprises:
(1) high-definition image input: an ARTAM-1400MI-USB3 high-definition camera transfers images to the processor through a USB interface;
(2) a fast image processing system: the SECO CARMA DevKit embedded CUDA hardware/software platform uses parallel computing to perform real-time feature point detection on the high-definition images collected in real time;
(3) a host computer, which subscribes to the computation results of the SECO CARMA DevKit platform and performs the final image matching and fusion.
According to the above inventive concept, the invention adopts the following technical solution:
A large-field-of-view image stitching system based on bionic binocular eyes comprises two high-definition cameras, and is characterized in that: the two high-definition cameras are connected respectively to corresponding image fast processors CARMA DevKit; the image fast processors CARMA DevKit are connected over the network to a switch; and the switch is connected to a main control computer and a DSP processor.
A large-field-of-view image stitching method based on bionic binocular eyes uses the above stitching system to carry out image stitching, and is characterized in that the stitching steps are as follows:
Step 1: set the main control computer as the MASTER of the Robot Operating System (ROS), create image acquisition nodes on the CARMA DevKit boards, and let the high-definition cameras capture high-definition images and import them in real time into the image fast processors CARMA DevKit through the USB interface;
Step 2: using the distributed computing feature of ROS, create SURF algorithm nodes on the CARMA DevKit boards to extract feature points from the acquired images;
Step 3: publish the feature point description data computed by the CARMA DevKit computing nodes to the matching node of the main control computer, which performs optimal feature point matching;
Step 4: publish the optimal feature matching results computed by the matching node of the main control computer to the affine stitching computing node, which, after the affine computation, stitches the data of the left and right cameras into one image.
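The four-step node topology above could be expressed, in ROS's conventional roslaunch format, roughly as follows. All node, package, machine and topic names here are hypothetical illustrations — the patent does not give them, and the SURF, matching and stitching nodes would be custom packages rather than stock ROS ones; only `uvc_camera` is named in the text:

```xml
<!-- Hypothetical sketch of the distributed node graph (all names assumed). -->
<launch>
  <!-- On each CARMA DevKit: camera acquisition + SURF feature extraction -->
  <machine name="carma_left"  address="carma-left.local" />
  <machine name="carma_right" address="carma-right.local" />

  <node machine="carma_left"  pkg="uvc_camera"   type="uvc_camera_node" name="cam_left" />
  <node machine="carma_left"  pkg="surf_extract" type="surf_node"       name="surf_left" />

  <node machine="carma_right" pkg="uvc_camera"   type="uvc_camera_node" name="cam_right" />
  <node machine="carma_right" pkg="surf_extract" type="surf_node"       name="surf_right" />

  <!-- On the main control computer (the ROS MASTER): matching + affine stitching -->
  <node pkg="stitching" type="match_node"  name="matcher" />
  <node pkg="stitching" type="stitch_node" name="stitcher" />
</launch>
```

The design point this illustrates is the one the patent claims: heavy per-image work (acquisition, SURF extraction) stays on the edge devices, and only descriptors and match results cross the network to the master.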
In the described method, the concrete steps of image acquisition in Step 1 are as follows:
Step 1-1: use the Zeroconf technology to interconnect the main control computer and the two CARMA DevKit boards as network nodes, then set the main control computer as the MASTER of the ROS distributed system.
Step 1-2: create camera nodes through the camera driver package uvc_camera integrated in ROS, and publish the image data.
In the described method, the concrete steps of image feature point detection in Step 2 are as follows:
Step 2-1: install ROS on the CARMA DevKit boards and connect them to the main control computer through Zeroconf.
Step 2-2: using the OpenCV image processing library integrated in ROS, create distributed SURF algorithm nodes, extract feature points from the acquired images, and publish the computation results to the matching node of the host.
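The edge-side detection of Step 2-2 can be illustrated with a minimal interest-point detector. SURF itself is an elaborate algorithm (available through OpenCV's contrib `xfeatures2d` module), so the sketch below substitutes a tiny Harris-style corner response in plain NumPy purely to show the shape of the computation a CARMA DevKit node performs before publishing results; the constants `k` and `thresh` are illustrative assumptions:

```python
import numpy as np

def harris_corners(img, k=0.05, thresh=0.1):
    """Toy Harris corner detector (stand-in for SURF interest-point detection).

    img: 2-D float array. Returns a list of (row, col) corner coordinates.
    """
    # Image gradients via central finite differences.
    iy, ix = np.gradient(img)

    # Crude 3x3 box filter to accumulate the structure tensor locally.
    def box(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[di:di + a.shape[0], dj:dj + a.shape[1]]
                   for di in range(3) for dj in range(3)) / 9.0

    sxx, syy, sxy = box(ix * ix), box(iy * iy), box(ix * iy)
    # Harris response: det(M) - k * trace(M)^2; positive at corners,
    # negative along edges, ~0 in flat regions.
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    if r.max() <= 0:
        return []
    ys, xs = np.where(r > thresh * r.max())
    return list(zip(ys.tolist(), xs.tolist()))
```

A real SURF node would additionally compute a descriptor per keypoint and publish only keypoints plus descriptors, which is far smaller than the raw high-definition frame — the bandwidth saving that motivates extracting on the DevKit rather than on the master.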
In the described method, the concrete steps of image feature point matching in Step 3 are as follows:
Step 3-1: install ROS on the main control computer and use Zeroconf to set the main control computer as the MASTER of the whole ROS distributed computation.
Step 3-2: create the optimal matching node, which subscribes to the feature point descriptors published by the SURF algorithm nodes and performs optimal feature point matching.
In the described method, the concrete steps of affine image stitching in Step 4 are as follows:
Step 4-1: create the affine image stitching node on the main control computer, which subscribes to the optimal feature matching information published by the optimal matching node and to the image data published by the CARMA DevKit image acquisition nodes.
Step 4-2: from the optimal feature matching information, find the homography matrix between the left and right images, apply the affine transformation to stitch them into one image, and perform data fusion.
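The transform estimation in Step 4-2 can be sketched as a least-squares fit of a 2-D affine transform from the matched point pairs. This is a simplification: the patent speaks of finding a homography, and a production pipeline would use `cv2.findHomography` with RANSAC to reject outlier matches before warping; the plain least-squares affine fit below only shows the underlying linear-algebra step:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of a 2-D affine transform: dst ~= A @ src + t.

    src_pts, dst_pts: (N, 2) arrays of matched feature coordinates, N >= 3
    and not all collinear. Returns a 3x3 matrix in homogeneous form
    (last row [0, 0, 1]) suitable for warping one image onto the other.
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    # Design matrix with rows [x, y, 1], one per matched point.
    ones = np.ones((len(src), 1))
    m = np.hstack([src, ones])                        # (N, 3)
    params, *_ = np.linalg.lstsq(m, dst, rcond=None)  # (3, 2)
    h = np.eye(3)
    h[:2, :3] = params.T                              # [A | t]
    return h
```

Once `h` is known, the right image is warped into the left image's frame and the overlap region is blended — the "data fusion" the step refers to.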
Compared with the prior art, the invention has the following apparent and outstanding substantive features and remarkable advantages: on the on-board platform, the invention can rapidly detect the feature points in the input high-definition images. Thus no dedicated on-board image processing hardware or hardware-optimized algorithm is needed, and a fast feature point detection and image matching system can be deployed on a mobile robot platform.
Brief description of the drawings
Fig. 1 is the hardware block diagram of the system of the invention.
Fig. 2 is the flow chart of the method of the invention.
Fig. 3 shows an image stitching result.
Embodiment
Below in conjunction with accompanying drawing, the preferred embodiments of the present invention are clearly and completely described, obviously, described embodiment is only a part of embodiment of the present invention.
Embodiment 1:
Referring to Fig. 1, this large-field-of-view image stitching system based on bionic binocular eyes comprises two high-definition cameras 1, 2, and is characterized in that the cameras are connected respectively to corresponding image fast processors CARMA DevKit 3, 4 (CARMA DevKit is an embedded GPGPU platform product for researchers released by the embedded-hardware solutions supplier SECO jointly with NVIDIA; it uses the parallel computing capability of the graphics card to increase computing speed and builds up a portable high-performance computing platform). The two image fast processors CARMA DevKit 3, 4 are connected over the network to a switch 5; the switch 5 is connected to a main control computer 6 and a DSP processor 7. The two high-definition cameras 1, 2 are ARTAM-1400MI-USB3 14-megapixel USB3 industrial cameras from ARTRAY of Japan; the two image fast processors CARMA DevKit 3, 4 are produced by SECO and are mainly used for embedded CUDA computation; the switch 5 is a TP-LINK TL-SF1008+ 8-port 100 Mbps switch; the main control computer 6 is an HP 8470w mobile workstation; and the DSP processor 7 is a TMS320F2812 (DSP2812) development board.
Embodiment 2:
Referring to Fig. 1, Fig. 2 and Fig. 3, this large-field-of-view image stitching method based on bionic binocular eyes uses the above system to carry out image stitching; the stitching steps are as follows:
Step 1: set the main control computer as the MASTER of the Robot Operating System (ROS; released by Willow Garage in March 2010, ROS provides various libraries and tools to help developers build robot applications, including a hardware abstraction layer, hardware drivers, visualization tools, message passing and package management), create image acquisition nodes on the CARMA DevKit boards, and let the high-definition cameras capture high-definition images and import them in real time into the image fast processors CARMA DevKit through the USB interface;
Step 2: using the distributed computing feature of ROS, create SURF algorithm nodes on the CARMA DevKit boards to extract feature points from the acquired images;
Step 3: publish the feature point description data computed by the CARMA DevKit computing nodes to the matching node of the main control computer, which performs optimal feature point matching;
Step 4: publish the optimal feature matching results computed by the matching node of the main control computer to the affine stitching computing node, which, after the affine computation, stitches the data of the left and right cameras into one image.
Embodiment 3:
This embodiment is basically identical to Embodiment 2; its special feature is that the concrete steps of image acquisition in Step 1 are as follows:
Step 1-1: use the Zeroconf technology to interconnect the main control computer and the two CARMA DevKit boards as network nodes, then set the main control computer as the MASTER of the ROS distributed system.
Step 1-2: create camera nodes through the camera driver package uvc_camera integrated in ROS, and publish the image data.
The concrete steps of image feature point detection in Step 2 are as follows:
Step 2-1: install ROS on the CARMA DevKit boards and connect them to the main control computer through Zeroconf.
Step 2-2: using the OpenCV image processing library integrated in ROS, create distributed SURF algorithm nodes, extract feature points from the acquired images, and publish the computation results to the matching node of the host.
The concrete steps of image feature point matching in Step 3 are as follows:
Step 3-1: install ROS on the main control computer and use Zeroconf to set the main control computer as the MASTER of the whole ROS distributed computation.
Step 3-2: create the optimal matching node, which subscribes to the feature point descriptors published by the SURF algorithm nodes and performs optimal feature point matching.
The concrete steps of affine image stitching in Step 4 are as follows:
Step 4-1: create the affine image stitching node on the main control computer, which subscribes to the optimal feature matching information published by the optimal matching node and to the image data published by the CARMA DevKit image acquisition nodes.
Step 4-2: from the optimal feature matching information, find the homography matrix between the left and right images, apply the affine transformation to stitch them into one image, and perform data fusion.
The above is only a specific embodiment of the invention, but the protection scope of the invention is not limited thereto. Any variation or replacement that can easily be conceived by a person of ordinary skill in the art within the technical scope disclosed by the invention shall be encompassed within the protection scope of the invention. Therefore, the protection scope of the invention shall be determined by the protection scope of the claims.

Claims (6)

1. A large-field-of-view image stitching system based on bionic binocular eyes, comprising two high-definition cameras (1, 2), characterized in that: the two high-definition cameras (1, 2) are connected respectively to corresponding image fast processors CARMA DevKit (3, 4); the image fast processors CARMA DevKit (3, 4) are connected over the network to a switch (5); and the switch (5) is connected to a main control computer (6) and a DSP processor (7).
2. A large-field-of-view image stitching method based on bionic binocular eyes, using the large-field-of-view image stitching system according to claim 1 to carry out image stitching, characterized in that the stitching steps are as follows:
Step 1: set the main control computer (6) as the MASTER of the Robot Operating System (ROS), create image acquisition nodes (12, 13) on the CARMA DevKit boards (3, 4), and let the high-definition cameras (1, 2) capture high-definition images and import them in real time into the image fast processors CARMA DevKit (3, 4) through the USB interface;
Step 2: using the distributed computing feature of ROS, create SURF algorithm nodes (8, 9) on the CARMA DevKit boards (3, 4) to extract feature points from the acquired images;
Step 3: publish the feature point description data computed by the CARMA DevKit (3, 4) computing nodes (8, 9) to the matching node (10) of the main control computer (6), which performs optimal feature point matching;
Step 4: publish the optimal feature matching results computed by the matching node (10) of the main control computer (6) to the affine stitching computing node (11), which, after the affine computation, stitches the data of the left and right cameras into one image.
3. The large-field-of-view image stitching method based on bionic binocular eyes according to claim 2, characterized in that the concrete steps of image acquisition in Step 1 are as follows:
Step 1-1: use the Zeroconf technology to interconnect the main control computer (6) and the two CARMA DevKit boards (3, 4) as network nodes, then set the main control computer (6) as the MASTER of the ROS distributed system;
Step 1-2: create camera nodes through the camera driver package uvc_camera integrated in ROS, and publish the image data.
4. The large-field-of-view image stitching method based on bionic binocular eyes according to claim 2, characterized in that the concrete steps of image feature point detection in Step 2 are as follows:
Step 2-1: install ROS on the CARMA DevKit boards (3, 4) and connect them to the main control computer (6) through Zeroconf;
Step 2-2: using the OpenCV image processing library integrated in ROS, create distributed SURF algorithm nodes (8, 9), extract feature points from the acquired images, and publish the computation results to the matching node (10) of the host.
5. The large-field-of-view image stitching method based on bionic binocular eyes according to claim 2, characterized in that the concrete steps of image feature point matching in Step 3 are as follows:
Step 3-1: install ROS on the main control computer (6) and use Zeroconf to set the main control computer (6) as the MASTER of the whole ROS distributed computation;
Step 3-2: create the optimal matching node (10), which subscribes to the feature point descriptors published by the SURF algorithm nodes (8, 9) and performs optimal feature point matching.
6. The large-field-of-view image stitching method based on bionic binocular eyes according to claim 2, characterized in that the concrete steps of affine image stitching in Step 4 are as follows:
Step 4-1: create the affine image stitching node on the main control computer (6), which subscribes to the optimal feature matching information published by the optimal matching node (10) and to the image data published by the CARMA DevKit (3, 4) image acquisition nodes (12, 13);
Step 4-2: from the optimal feature matching information, find the homography matrix between the left and right images, apply the affine transformation to stitch them into one image, and perform data fusion.
CN201410196649.5A 2014-05-12 2014-05-12 A kind of big view field image splicing system and method based on bionical eyes Active CN103996181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410196649.5A CN103996181B (en) 2014-05-12 2014-05-12 A kind of big view field image splicing system and method based on bionical eyes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410196649.5A CN103996181B (en) 2014-05-12 2014-05-12 A kind of big view field image splicing system and method based on bionical eyes

Publications (2)

Publication Number Publication Date
CN103996181A true CN103996181A (en) 2014-08-20
CN103996181B CN103996181B (en) 2017-06-23

Family

ID=51310338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410196649.5A Active CN103996181B (en) 2014-05-12 2014-05-12 A kind of big view field image splicing system and method based on bionical eyes

Country Status (1)

Country Link
CN (1) CN103996181B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105215998A (en) * 2015-08-18 2016-01-06 长安大学 The multi-vision visual platform of a kind of imitative spider
US20160307027A1 (en) * 2015-01-12 2016-10-20 Yutou Technology (Hangzhou) Co., Ltd. A system and a method for image recognition
CN106506464A (en) * 2016-10-17 2017-03-15 武汉秀宝软件有限公司 A kind of toy exchange method and system based on augmented reality
CN106989730A (en) * 2017-04-27 2017-07-28 上海大学 A kind of system and method that diving under water device control is carried out based on binocular flake panoramic vision
CN110516531A (en) * 2019-07-11 2019-11-29 广东工业大学 A kind of recognition methods of the dangerous mark based on template matching

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1490594A (en) * 2003-08-22 2004-04-21 湖南大学 Multiple free degree artificial threedimensional binocular vision apparatus
CN1932841A (en) * 2005-10-28 2007-03-21 南京航空航天大学 Petoscope based on bionic oculus and method thereof
CN101316210A (en) * 2008-06-26 2008-12-03 河海大学 Multi-cam biomechanism-imitated intelligent perception broadband wireless mesh network
US20110069149A1 (en) * 2006-09-04 2011-03-24 Samsung Electronics Co., Ltd. Method for taking panorama mosaic photograph with a portable terminal
CN202512133U (en) * 2012-03-29 2012-10-31 河海大学 Large-scale particle image velocity measuring system based on double-camera visual field splice
EP2332011B1 (en) * 2008-08-18 2012-12-26 Holakovszky, Làszló Device for displaying panorama
CN103473532A (en) * 2013-09-06 2013-12-25 上海大学 Pedestrian fast detection system and method of vehicle-mounted platform


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KRISHNASAMY R et al.: "High precision target tracking with a compound-eye image sensor", Electrical and Computer Engineering *
刘治湘 et al.: "Research on image tracking technology based on a bionic binocular mechanical pan-tilt", Mechanical Engineer *
蔡梦颖: "Research on bionic compound-eye vision system calibration and large-field-of-view image stitching", China Master's Theses Full-text Database, Information Science and Technology *
陈金波 et al.: "Sonar image stitching based on a bionic mechanical pan-tilt", Journal of Applied Sciences *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160307027A1 (en) * 2015-01-12 2016-10-20 Yutou Technology (Hangzhou) Co., Ltd. A system and a method for image recognition
US9875391B2 (en) * 2015-01-12 2018-01-23 Yutou Technology (Hangzhou) Co., Ltd. System and a method for image recognition
CN105215998A (en) * 2015-08-18 2016-01-06 长安大学 The multi-vision visual platform of a kind of imitative spider
CN106506464A (en) * 2016-10-17 2017-03-15 武汉秀宝软件有限公司 A kind of toy exchange method and system based on augmented reality
CN106506464B (en) * 2016-10-17 2019-11-12 武汉秀宝软件有限公司 A kind of toy exchange method and system based on augmented reality
CN106989730A (en) * 2017-04-27 2017-07-28 上海大学 A kind of system and method that diving under water device control is carried out based on binocular flake panoramic vision
CN110516531A (en) * 2019-07-11 2019-11-29 广东工业大学 A kind of recognition methods of the dangerous mark based on template matching
CN110516531B (en) * 2019-07-11 2023-04-11 广东工业大学 Identification method of dangerous goods mark based on template matching

Also Published As

Publication number Publication date
CN103996181B (en) 2017-06-23

Similar Documents

Publication Publication Date Title
US11762475B2 (en) AR scenario-based gesture interaction method, storage medium, and communication terminal
Li et al. Delving into the devils of bird's-eye-view perception: A review, evaluation and recipe
CN103996181A (en) Large-vision-field image splicing system and method based on two bionic eyes
AU716654B2 (en) Data processing system and method
CN100487724C (en) Quick target identification and positioning system and method
KR102358543B1 (en) Authoring of augmented reality experiences using augmented reality and virtual reality
CN104834379A (en) Repair guide system based on AR (augmented reality) technology
US20220188346A1 (en) Method, apparatus, and computer program product for training a signature encoding module and a query processing module to identify objects of interest within an image utilizing digital signatures
CN111666876A (en) Method and device for detecting obstacle, electronic equipment and road side equipment
JP2024528419A (en) Method and apparatus for updating an object detection model
Bhattacharya et al. A method for real-time generation of augmented reality work instructions via expert movements
Liu et al. Depth estimation of traffic scenes from image sequence using deep learning
CN112862973A (en) Real-time remote training method and system based on fault site
CN209746614U (en) Simulation interaction visualization system of virtual robot workstation
Türkmen Scene understanding through semantic image segmentation in augmented reality
Martinez et al. 3D reconstruction with a markerless tracking method of flexible and modular molecular physical models: towards tangible interfaces
Yang et al. Research on Satellite Cable Laying and Assembly Guidance Technology Based on Augmented Reality
KR20210038451A (en) Verification method and device for modeling route, unmanned vehicle, and storage medium
Wei et al. Real-time 3D human motion capture from binocular stereo vision and its use on human body animation
Aliakbarpour et al. PhD forum: Volumetric 3D reconstruction without planar ground assumption
CN111754543A (en) Image processing method, device and system
CN104036477A (en) Large-view-field image splicing device and method based on two biomimetic eyes
CN109687340A (en) The transmission line of electricity inspection system that full-view image is merged with virtual reality
Wang Realtime 3D Reconstruction with Mobile Devices
Zhang et al. Interaction between virtual scene based on Kinect and Unity3D

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221129

Address after: 200444 Room 190, Building A, 5/F, Building 1, No. 1000 Zhenchen Road, Baoshan District, Shanghai

Patentee after: Jinghai Intelligent Equipment Co.,Ltd.

Address before: 200444 No. 99 Shangda Road, Baoshan District, Shanghai

Patentee before: Shanghai University
