CN111541888A - AR implementation method based on display surface - Google Patents

AR implementation method based on display surface

Info

Publication number
CN111541888A
CN111541888A
Authority
CN
China
Prior art keywords: positioning, display surface, identification, enhancement, coordinate system
Legal status: Pending
Application number
CN202010376140.4A
Other languages
Chinese (zh)
Inventor
陈利民 (Chen Limin)
Current Assignee
Qingdao Yueqian Technology Co ltd
Original Assignee
Qingdao Yueqian Technology Co ltd
Priority date: 2020-05-07
Filing date: 2020-05-07
Publication date: 2020-08-14
Application filed by Qingdao Yueqian Technology Co ltd
Priority to CN202010376140.4A
Publication of CN111541888A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • H04N 13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an AR implementation method based on a display surface, comprising an identification and positioning module, an AR synthesis operation module and a display surface. The identification and positioning module can position by visual identification, radio positioning or optical positioning. The AR synthesis operation module establishes a virtual space reference coordinate system based on the real space and the display arrangement. After the human eyes are located, the target enhancement object is located and the corresponding AR enhancement information is obtained. The AR enhancement information is substituted into the virtual space reference coordinate system, information enhancement is performed on the target enhancement object, and the virtual image is drawn at those coordinates. Once the position of the human eyes in real space is known, the virtual image is coordinate-converted according to the corresponding coordinate operation and displayed on the display surface. The method can be implemented at low cost using only a general-purpose display device and a positioning device such as a camera. It allows small terminals such as mobile phones to provide the AR interaction of large-scale equipment at lower cost, can be widely applied in many fields, and offers more possibilities for innovative AR applications and development.

Description

AR implementation method based on display surface
Technical Field
The invention relates to the field of information enhancement in real space display, in particular to an AR implementation method based on a display surface.
Background
In recent years, the development of AI image recognition technology, together with lower costs and greater computing power, has opened new possibilities for implementing AR. Among traditional display approaches, naked-eye 3D display requires a special display screen, a very complex process and a higher cost; capturing with a camera and showing the result directly on a display screen usually lacks a sense of depth; and VR displays isolate the user from reality, which limits interaction.
Disclosure of Invention
To solve the above problems, the technical solution provided by the invention is as follows:
A display-surface-based AR implementation method comprises an identification and positioning module, an AR synthesis operation module and a display surface.
The identification and positioning module performs human eye positioning and target object positioning.
The identification and positioning module can position by visual identification, radio positioning or optical positioning.
Typically, visual identification positioning can use AI-assisted visual recognition, and the human eyes can be identified with the help of glasses carrying markers.
The AI-assisted visual recognition can use either independent recognition or array image recognition.
Optical positioning may use optical markers combined with a laser grid.
Display surfaces can be classified by screen type and by screen shape.
By screen type, the display surface may be a mirror screen, a projector, a conventional display screen or other display equipment.
By screen shape, the display surface may be a flat screen, a curved screen or a tiled screen.
When the display surface is a mirror screen, a mirror-image space coordinate operation must be added; when it is a curved screen, an additional affine projection coordinate operation is used; when it is a holographic projection, several projection-matrix operations are added according to the projection device.
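To make the mirror case concrete, the following is a minimal sketch of the mirror-image space coordinate operation; it is not taken from the patent, and the helper name and the choice of plane are assumptions. It reflects a real-space point across the plane of the mirror screen:

```python
import numpy as np

def mirror_across_plane(point, plane_point, plane_normal):
    """Reflect a 3D point across the mirror screen plane.

    Hypothetical helper illustrating the mirror-image space coordinate
    operation: the plane is given by any point on it and its normal.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    p = np.asarray(point, dtype=float)
    distance = np.dot(p - np.asarray(plane_point, dtype=float), n)
    return p - 2.0 * distance * n

# Example: a mirror screen lying in the plane z = 0 of the real space frame.
eye_real = np.array([0.2, 1.6, 1.0])                       # metres
eye_mirror = mirror_across_plane(eye_real, [0, 0, 0], [0, 0, 1])
# eye_mirror is [0.2, 1.6, -1.0]; the mirror space reference frame is the
# mirror image of the real space frame.
```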
The AR synthesis operation module establishes a virtual space reference coordinate system based on the real space and the display arrangement.
At least the space coordinates of the human eyes and of the target enhancement object are obtained in the real space reference coordinate system.
The space coordinates of the human eyes and the target enhancement object are substituted into the virtual space reference coordinate system.
AR enhancement information is obtained according to the user's needs and placed at the space coordinates of the target enhancement object in the virtual space reference coordinate system, completing the information enhancement of the target enhancement object and drawing the virtual image at those coordinates.
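As a purely illustrative sketch of this step (the VirtualScene and place_enhancement names are invented here and are not part of the patent), anchoring the AR enhancement information at the target object's coordinates in the virtual space reference frame could look like:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Enhancement:
    content: str            # e.g. a label, a texture or a 3D model identifier
    position: np.ndarray    # coordinates in the virtual space reference frame

@dataclass
class VirtualScene:
    enhancements: list = field(default_factory=list)

    def place_enhancement(self, content, target_coords):
        """Anchor AR enhancement information at the target object's coordinates."""
        self.enhancements.append(
            Enhancement(content, np.asarray(target_coords, dtype=float)))

scene = VirtualScene()
# The target enhancement object was located at these virtual-space coordinates.
scene.place_enhancement("product info card", target_coords=[0.4, 1.2, 0.8])
```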
The space coordinates of the human eyes in the real space reference coordinate system are obtained.
The coordinates carrying the virtual image are converted according to the spatial coordinate operation corresponding to the selected display surface, and the result is finally displayed, according to a projection matrix algorithm, at the position on the display surface observed by the human eyes.
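The drawing position on the display surface is viewer dependent: the virtual image must appear where the line of sight from the eye to the enhanced coordinates crosses the surface. Below is a minimal sketch of that conversion for a flat screen; the function name and plane parameters are assumptions, and a curved or mirror screen would add the extra operations described above:

```python
import numpy as np

def draw_point_on_display(eye, virtual_point, plane_point, plane_normal):
    """Intersect the line of sight eye -> virtual_point with the display plane.

    Returns the 3D point on the display surface where the virtual image
    should be drawn so that, seen from `eye`, it appears to lie at
    `virtual_point`. A renderer would then map this point through the
    display's projection matrix to pixel coordinates.
    """
    eye = np.asarray(eye, dtype=float)
    target = np.asarray(virtual_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    direction = target - eye
    denom = np.dot(direction, n)
    if abs(denom) < 1e-9:
        raise ValueError("line of sight is parallel to the display surface")
    t = np.dot(np.asarray(plane_point, dtype=float) - eye, n) / denom
    return eye + t * direction

# Example: eye 1 m in front of a screen lying in the plane z = 0.
spot = draw_point_on_display(eye=[0.0, 1.6, 1.0],
                             virtual_point=[0.3, 1.2, -0.5],
                             plane_point=[0, 0, 0], plane_normal=[0, 0, 1])
```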
In summary, the display-surface-based AR implementation method proposed by the present invention can be realized at low cost using only a general-purpose display device and a positioning device such as a camera. It allows small terminals such as mobile phones to provide, at lower cost, the AR interaction of large-scale equipment, can be widely applied in many fields, and opens up more possibilities for innovative AR applications and development.
Drawings
FIG. 1 illustrates the independent recognition method and its calculation process;
FIG. 2 illustrates the array image recognition method and its calculation process;
FIG. 3 shows the AR synthesis operation module using dual-camera AI vision assistance with a mirror screen.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to specific embodiments.
The invention provides an AR implementation method based on a display surface, comprising an identification and positioning module, an AR synthesis operation module and a display surface.
The identification and positioning module performs human eye positioning and target object positioning.
The identification and positioning module can position by visual identification, radio positioning or optical positioning.
Typically, visual identification positioning can use AI-assisted visual recognition, and the human eyes can be identified with the help of glasses carrying markers.
The AI-assisted visual recognition can use either independent recognition or array image recognition.
Independent recognition is technically simple, low cost and suitable for general situations; array image recognition captures with higher precision but requires special training and suits only special occasions.
As shown in FIG. 1, when positioning with the independent recognition method, a camera captures an image and AI vision-assisted recognition outputs plane coordinates, including the plane coordinates of the human eyes and of the target object; the space coordinates of the human eyes and the target object are finally restored by a linear regression equation.
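The patent does not spell the regression out. The following is a minimal sketch under the assumption that the regression maps the recognized plane coordinates, together with the apparent size of the recognized feature as a depth cue, to space coordinates, with coefficients fitted from a few calibration samples; all numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical calibration samples: for known 3D positions we record the plane
# coordinates (u, v) and apparent feature size s reported by the AI recognition.
features = np.array([        # columns: u, v, s, bias
    [320.0, 240.0, 80.0, 1.0],
    [400.0, 260.0, 60.0, 1.0],
    [250.0, 220.0, 95.0, 1.0],
    [360.0, 300.0, 45.0, 1.0],
])
positions = np.array([       # known (x, y, z) in the real space reference frame
    [ 0.00, 1.60, 1.00],
    [ 0.35, 1.55, 1.40],
    [-0.30, 1.65, 0.85],
    [ 0.20, 1.40, 1.90],
])

# Least-squares fit of the linear regression equation (one model per axis).
coefficients, *_ = np.linalg.lstsq(features, positions, rcond=None)

def restore_space_coordinates(u, v, s):
    """Map a recognized plane coordinate (plus feature size) to space coordinates."""
    return np.array([u, v, s, 1.0]) @ coefficients

eye_xyz = restore_space_coordinates(330, 245, 75)   # estimated eye position
```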
As shown in FIG. 2, when positioning with the array image recognition method, a camera array captures images, multiple images are combined into a composite image with stronger features, AI vision-assisted recognition is performed, and the space coordinates of the human eyes and the target object are finally restored.
Optical positioning may use optical markers combined with a laser grid.
Display surfaces can be classified by screen type and by screen shape.
By screen type, the display surface may be a mirror screen, a projector, a conventional display screen or other display equipment.
By screen shape, the display surface may be a flat screen, a curved screen or a tiled screen.
When the display surface is a mirror screen, a mirror-image space coordinate operation must be added; when it is a curved screen, an additional affine projection coordinate operation is used; when it is a holographic projection, several projection-matrix operations are added according to the projection device.
As shown in FIG. 3, the following example uses dual-camera AI vision assistance for the AR synthesis operation module and a mirror screen as the display surface.
A virtual space reference coordinate system is established based on the real space and the display arrangement. With the AI vision-assisted recognition and the mirror screen chosen for this example, a real space reference coordinate system is first established from the actual shooting ranges of the two cameras, and a mirror space reference coordinate system is then established based on the mirror screen. The two reference coordinate systems are mirror images of each other.
After the user enters the shooting ranges of the two cameras, the differently placed cameras capture two pictures of the human body to be recognized from different angles. After AI vision-assisted recognition, at least the plane coordinates of the human eyes and of the target enhancement object are obtained with respect to the real space reference coordinate system, and these plane coordinates are restored to the space coordinates of the human eyes and the target enhancement object through a linear regression equation based on the real space reference coordinate system.
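With two cameras the depth can also be recovered geometrically. The sketch below uses classical two-view (midpoint) triangulation, which is a common alternative and is not the linear regression equation named in the patent; the camera centres and ray directions are assumed to be known from calibration:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the closest points between two viewing rays.

    c1, c2: camera centres in the real space reference frame.
    d1, d2: ray directions from each camera towards the recognized
    plane coordinate (e.g. the human eye), in the same frame.
    """
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d1 = np.asarray(d1, float); d1 = d1 / np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 = d2 / np.linalg.norm(d2)
    # Solve for the ray parameters t1, t2 that minimise the gap between the rays.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two cameras 0.6 m apart, both looking towards the user.
eye_xyz = triangulate_midpoint(c1=[-0.3, 1.5, 0.0], d1=[ 0.2, 0.05, 1.0],
                               c2=[ 0.3, 1.5, 0.0], d2=[-0.2, 0.05, 1.0])
```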
The space coordinates of the human eyes and the target enhancement object are substituted into the mirror space reference coordinate system.
AR enhancement information is obtained according to the user's needs and placed at the space coordinates of the target enhancement object in the mirror space reference coordinate system, completing the information enhancement of the target enhancement object and drawing the virtual image at those coordinates. At this point the mirror space reference coordinate system contains at least the space coordinates of the user's eyes, of the target enhancement object and of the AR enhancement information.
The space coordinates of the human eyes in the real space reference coordinate system are obtained.
The coordinates carrying the virtual image are converted using the mirror space coordinate operation, and the image is finally displayed, according to a projection matrix algorithm, at the position on the mirror screen observed by the human eyes.
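Tying the mirror-screen example together, the following compact sketch chains the mirror space coordinate operation with the viewer-dependent drawing position; as before, it assumes the screen lies in the plane z = 0 and all helper names are invented for illustration:

```python
import numpy as np

MIRROR_POINT = np.zeros(3)                 # the mirror screen lies in the plane z = 0
MIRROR_NORMAL = np.array([0.0, 0.0, 1.0])

def reflect(point):
    """Mirror space coordinate operation: reflect across the screen plane."""
    p = np.asarray(point, dtype=float)
    return p - 2.0 * np.dot(p - MIRROR_POINT, MIRROR_NORMAL) * MIRROR_NORMAL

def draw_position_on_mirror(eye_real, enhanced_point_real):
    """Point on the mirror screen at which the virtual image is drawn.

    The enhanced coordinates are carried into the mirror space reference
    frame by `reflect`; the drawing position is where the line of sight
    from the real eye to that mirrored point crosses the screen plane,
    so the enhancement appears attached to the user's reflection.
    """
    eye = np.asarray(eye_real, dtype=float)
    target = reflect(enhanced_point_real)                  # mirror space coordinates
    direction = target - eye
    t = np.dot(MIRROR_POINT - eye, MIRROR_NORMAL) / np.dot(direction, MIRROR_NORMAL)
    return eye + t * direction                             # lies in the screen plane

# Eye about 1 m in front of the mirror, target object near the user's chest.
spot = draw_position_on_mirror(eye_real=[0.0, 1.6, 1.0],
                               enhanced_point_real=[0.1, 1.3, 0.6])
```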
The above describes only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structural changes made using the contents of this specification and the drawings, applied directly or indirectly in other related technical fields, likewise fall within the scope of the present invention.

Claims (7)

1. A display-surface-based AR implementation method, characterized by comprising: an identification and positioning module, an AR synthesis operation module and a display surface;
the identification and positioning module comprising human eye positioning and target object positioning.
2. The method of claim 1, further comprising:
the identification positioning module can be used for positioning in a visual identification manner, a radio positioning manner and an optical positioning manner.
3. The method of claim 2, further comprising:
the visual identification positioning may be identified using AI visual assistance.
4. The method of claim 2, further comprising:
the AI vision-aided recognition can select two methods of independent recognition and array image recognition.
5. The method of claim 4, further comprising:
the human eye positioning can use glasses with marking functions for AI vision auxiliary recognition.
6. The method of claim 1, further comprising:
a virtual space reference coordinate system needs to be established based on the real space and the display arrangement.
7. The method of claim 1, further comprising:
and substituting AR enhancement information into the virtual space reference coordinate system, performing information enhancement on the target object, and drawing the virtual image on the coordinates.
CN202010376140.4A | Priority: 2020-05-07 | Filed: 2020-05-07 | AR implementation method based on display surface | Pending | CN111541888A

Priority Applications (1)

Application Number: CN202010376140.4A | Priority Date: 2020-05-07 | Filing Date: 2020-05-07 | Title: AR implementation method based on display surface

Applications Claiming Priority (1)

Application Number: CN202010376140.4A | Priority Date: 2020-05-07 | Filing Date: 2020-05-07 | Title: AR implementation method based on display surface

Publications (1)

Publication Number: CN111541888A | Publication Date: 2020-08-14

Family

ID=71977423

Family Applications (1)

Application Number: CN202010376140.4A | Title: AR implementation method based on display surface (CN111541888A, pending) | Priority Date: 2020-05-07 | Filing Date: 2020-05-07

Country Status (1)

Country Link
CN (1) CN111541888A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833896A (en) * 2010-04-23 2010-09-15 西安电子科技大学 Geographic information guide method and system based on augment reality
US20140160012A1 (en) * 2012-12-11 2014-06-12 Automotive Research & Test Center Automatic correction device of vehicle display system and method thereof
CN108431730A (en) * 2015-12-24 2018-08-21 荷兰联合利华有限公司 Enhanced mirror
CN110869901A (en) * 2017-05-08 2020-03-06 Lg电子株式会社 User interface device for vehicle and vehicle
CN108153502A (en) * 2017-12-22 2018-06-12 长江勘测规划设计研究有限责任公司 Hand-held augmented reality display methods and device based on transparent screen
CN110203140A (en) * 2019-06-28 2019-09-06 威马智慧出行科技(上海)有限公司 Automobile augmented reality display methods, electronic equipment, system and automobile
CN110825234A (en) * 2019-11-11 2020-02-21 江南大学 Projection type augmented reality tracking display method and system for industrial scene

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631487A (en) * 2020-12-18 2021-04-09 咪咕文化科技有限公司 Image processing method, electronic device, and readable storage medium

Similar Documents

Publication Publication Date Title
CN109584295B (en) Method, device and system for automatically labeling target object in image
CN101566875B (en) Image processing apparatus, and image processing method
CN108304075B (en) Method and device for performing man-machine interaction on augmented reality device
CN106101689B (en) The method that using mobile phone monocular cam virtual reality glasses are carried out with augmented reality
CN109801379B (en) Universal augmented reality glasses and calibration method thereof
KR102079097B1 (en) Device and method for implementing augmented reality using transparent display
CN108388341B (en) Man-machine interaction system and device based on infrared camera-visible light projector
Andersen et al. Virtual annotations of the surgical field through an augmented reality transparent display
CN103136744A (en) Apparatus and method for calculating three dimensional (3D) positions of feature points
CN112954292B (en) Digital museum navigation system and method based on augmented reality
CN111209811B (en) Method and system for detecting eyeball attention position in real time
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
CN113628322B (en) Image processing, AR display and live broadcast method, device and storage medium
CN104731338B (en) One kind is based on enclosed enhancing virtual reality system and method
CN108430032B (en) Method and equipment for realizing position sharing of VR/AR equipment
JP6420605B2 (en) Image processing device
CN114640833B (en) Projection picture adjusting method, device, electronic equipment and storage medium
CN109035307A (en) Setting regions target tracking method and system based on natural light binocular vision
CN206378680U (en) 3D cameras based on 360 degree of spacescans of structure light multimode and positioning
WO2023280082A1 (en) Handle inside-out visual six-degree-of-freedom positioning method and system
CN114882106A (en) Pose determination method and device, equipment and medium
McIlroy et al. Kinectrack: 3d pose estimation using a projected dense dot pattern
CN111541888A (en) AR implementation method based on display surface
CN109688400A (en) Electronic equipment and mobile platform
CN114616586A (en) Image annotation method and device, electronic equipment and computer-readable storage medium

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
WD01  Invention patent application deemed withdrawn after publication

Application publication date: 2020-08-14