WO2019003359A1 - Wearable terminal display system, wearable terminal display method, and program - Google Patents

Wearable terminal display system, wearable terminal display method, and program

Info

Publication number
WO2019003359A1
Authority
WO
WIPO (PCT)
Prior art keywords
wearable terminal
instruction manual
display
explanation
image
Prior art date
Application number
PCT/JP2017/023814
Other languages
French (fr)
Japanese (ja)
Inventor
俊二 菅谷
Original Assignee
株式会社オプティム
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社オプティム filed Critical 株式会社オプティム
Priority to PCT/JP2017/023814 priority Critical patent/WO2019003359A1/en
Publication of WO2019003359A1 publication Critical patent/WO2019003359A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • The present invention relates to a wearable terminal display system, a wearable terminal display method, and a program for displaying, on the display plate of a wearable terminal, a collected instruction manual as augmented reality for an explanation object seen through the display plate.
  • Patent Document 1 discloses an apparatus that realizes a display control apparatus capable of presenting an electronic instruction manual of its own device, according to the device's state, in a more sophisticated form than before.
  • However, the apparatus of Patent Document 1 cannot display instruction manuals for objects other than its own device.
  • In view of this problem, an object of the present invention is to provide a wearable terminal display system, a wearable terminal display method, and a program that identify an explanation object from an image of the field of view of a wearable terminal and display an instruction manual, collected according to that object, on the display plate of the wearable terminal as augmented reality.
  • the present invention provides the following solutions.
  • The invention according to the first aspect is a wearable terminal display system for displaying an instruction manual of an explanation object on the display plate of a wearable terminal, comprising: image acquisition means for acquiring an image of an explanation object within the field of view of the wearable terminal; identification means for analyzing the image to identify the explanation object; collection means for collecting an instruction manual for the explanation object; and instruction manual display means for displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
  • The invention according to the first aspect also provides a wearable terminal display method for displaying an instruction manual of an explanation object on the display plate of a wearable terminal, comprising: an image acquisition step of acquiring an image of an explanation object within the field of view of the wearable terminal; an identification step of analyzing the image to identify the explanation object; a collection step of collecting an instruction manual for the explanation object; and an instruction manual display step of displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
  • The invention according to the first aspect further provides a program for causing a computer to execute: an image acquisition step of acquiring an image of an explanation object within the field of view of a wearable terminal; an identification step of analyzing the image to identify the explanation object; a collection step of collecting an instruction manual for the explanation object; and an instruction manual display step of displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
  • FIG. 1 is a schematic view of a wearable terminal display system.
  • FIG. 2 is an example in which an instruction manual for an explanation object is collected and displayed on the display plate of a wearable terminal.
  • The wearable terminal display system is a system for displaying, on the display plate of a wearable terminal, a collected instruction manual as augmented reality for an explanation object seen through the display plate.
  • A wearable terminal is a terminal with a field of view, such as smart glasses or a head-mounted display.
  • FIG. 1 is a schematic view of a wearable terminal display system according to a preferred embodiment of the present invention.
  • The wearable terminal display system includes image acquisition means, identification means, collection means, and instruction manual display means, which are realized by a control unit reading a predetermined program.
  • Although not shown, the system may similarly include determination means, change means, detection means, action result display means, position/direction acquisition means, estimation means, guideline display means, and selection acceptance means. These may be application-based, cloud-based, or otherwise.
  • Each of the above means may be realized by a single computer, or by two or more computers (for example, a server and a terminal).
  • The image acquisition means acquires an image of an explanation object that has entered the field of view of the wearable terminal.
  • The image may be captured by the camera of the wearable terminal, or by a device other than the wearable terminal, as long as such an image can be acquired.
  • The image may be a moving image or a still image. To display the instruction manual in real time, a real-time image is preferable.
  • The identification means analyzes the image to identify the explanation object. For example, it determines whether the explanation object is the dehumidifier X of company A, the medical instrument Y of company B, the game Z of company C, and so on.
  • The explanation object is not limited to these examples.
  • The explanation object can be identified from color, shape, size, characters, marks, and the like.
  • Only the explanation object in the center of the wearable terminal's field of view may be identified. Identifying only the object in the center of the field of view significantly reduces the time required for identification.
  • Machine learning may improve the accuracy of the image analysis. For example, machine learning is performed using past images of explanation objects as teacher data.
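As a concrete illustration of the identification described above, the sketch below matches the center of the field-of-view image against reference signatures using a nearest-neighbor comparison. The product names and color signatures are hypothetical placeholders; a real system would use a trained classifier rather than hand-set values.

```python
import numpy as np

# Hypothetical reference "signatures" (mean RGB) for known products.
# In practice these would come from a trained model, not hand-set values.
REFERENCE_SIGNATURES = {
    "Company A dehumidifier X": np.array([200.0, 200.0, 210.0]),
    "Company B medical instrument Y": np.array([240.0, 240.0, 245.0]),
    "Company C game Z": np.array([90.0, 60.0, 160.0]),
}

def identify_target(image: np.ndarray) -> str:
    """Identify the explanation object by nearest-neighbor matching on
    the mean color of the image center (a simple stand-in for real
    image analysis)."""
    h, w = image.shape[:2]
    # Use only the center of the field of view, as the text suggests,
    # to reduce the work needed for identification.
    center = image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    signature = center.reshape(-1, image.shape[2]).mean(axis=0)
    return min(
        REFERENCE_SIGNATURES,
        key=lambda name: np.linalg.norm(REFERENCE_SIGNATURES[name] - signature),
    )
```

Restricting the analysis to the central crop is what makes the "center of the field of view" optimization above concrete: three quarters of the pixels never enter the comparison.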
  • The collection means collects an instruction manual according to the explanation object. The instruction manual may be collected by referring to a database in which instruction manuals have been registered in advance. It may also be collected by accessing Web content linked in advance to the explanation object; for example, a URL linking the explanation object to its instruction manual can be assigned, so that the manual can be collected from that Web content. Alternatively, the instruction manual may be collected from Web content found by searching for the explanation object on the Internet; for example, since the instruction manual is sometimes posted on the explanation object's website, it can be collected via an Internet search. In some cases, instruction manuals can also be collected from social networking services (SNS) or word-of-mouth sites.
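The collection order described above (pre-registered database, then pre-linked Web content, then an Internet search) can be sketched as a simple fallback chain. All names and URLs here are illustrative assumptions, not real endpoints.

```python
# Hypothetical lookup flow for collecting a manual. Every entry and URL
# below is an illustrative placeholder.

MANUAL_DATABASE = {
    "Company A dehumidifier X": "Dehumidifier X manual (registered copy)",
}

LINKED_URLS = {
    "Company B medical instrument Y": "https://example.com/manuals/instrument-y",
}

def search_web_for_manual(target: str) -> str:
    # Stand-in for an Internet/SNS search; a real system would call a
    # search API here.
    return f"Search results for '{target} instruction manual'"

def collect_manual(target: str) -> str:
    if target in MANUAL_DATABASE:          # 1. pre-registered database
        return MANUAL_DATABASE[target]
    if target in LINKED_URLS:              # 2. pre-linked Web content
        return f"Manual fetched from {LINKED_URLS[target]}"
    return search_web_for_manual(target)   # 3. Internet search fallback
```

The ordering encodes a trust preference: a curated database entry beats a pre-assigned link, which beats an open-ended search.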
  • The instruction manual display means displays, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
  • An instruction manual drawn with broken lines is displayed as augmented reality, on the display plate of the wearable terminal, for an explanation object drawn with solid lines that is seen through the display plate.
  • Solid lines represent real objects; broken lines represent augmented reality.
  • The instruction manual displayed as augmented reality may be displayed so as to overlap the explanation object seen through the display plate. Since this can make the explanation object difficult to see, the manual's display may be switchable ON/OFF.
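A minimal model of the ON/OFF switching described above; the class and method names are hypothetical, and a real system would composite the text onto the display plate rather than return a string.

```python
from dataclasses import dataclass

@dataclass
class ManualOverlay:
    """Minimal model of the AR manual overlay, including the ON/OFF
    switch used when the overlay hides the explanation object."""
    text: str
    visible: bool = True

    def toggle(self) -> None:
        # Flip the overlay between shown and hidden.
        self.visible = not self.visible

    def render(self) -> str:
        # Return what would be drawn on the display plate this frame.
        return self.text if self.visible else ""
```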
  • The determination means determines whether the displayed instruction manual has been viewed. Whether it has been viewed may be determined by acquiring an image of the manual being viewed and analyzing that image. It may also be determined from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, or the like. For example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor may be used.
  • The change means marks the instruction manual as viewed if it is determined to have been viewed, and raises its attention level if it is determined not to have been viewed, so that it will be viewed. This makes it possible to see at a glance which instruction manuals have or have not been viewed.
  • A manual may be marked as viewed by putting a check in its check box.
  • A manual may be marked as viewed by pressing a stamp.
  • The attention level may be raised by changing the color or size of the instruction manual, or by pressing a stamp, so that the manual stands out.
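The viewed/attention behavior above can be sketched as follows; the attribute names and the "highlighted" level are assumptions chosen for illustration, standing in for the color, size, or stamp changes the text mentions.

```python
class ManualEntry:
    """Tracks whether a displayed manual has been viewed, and raises
    its attention level when it has not."""

    def __init__(self, title: str):
        self.title = title
        self.viewed = False
        self.attention = "normal"

    def mark(self, was_viewed: bool) -> None:
        if was_viewed:
            # Once viewed, the manual no longer needs emphasis.
            self.viewed = True
            self.attention = "normal"
        else:
            # Emphasize unviewed manuals so they get looked at.
            self.attention = "highlighted"
```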
  • The detection means detects an action on the displayed instruction manual.
  • The action is, for example, a gesture, a hand movement, or a gaze movement.
  • The action on the instruction manual may be detected from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, or the like.
  • For example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor may be used.
  • The action result display means displays, on the display plate of the wearable terminal, the result of the action as augmented reality for the explanation object seen through the display plate.
  • For example, the display of the instruction manual may be turned off.
  • A link attached to the instruction manual may be opened upon detecting an action for opening it.
  • A page may be turned.
  • Other actions may also be supported.
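A sketch of how detected actions might map to results, covering the examples above (turning the display off, opening a link, turning a page). The action names and state keys are invented for illustration.

```python
def apply_action(action: str, state: dict) -> dict:
    """Dispatch a detected action (gesture, gaze movement, etc.) to
    its displayed result. Returns a new state rather than mutating."""
    state = dict(state)
    if action == "dismiss":
        # Turn the manual's display off.
        state["manual_visible"] = False
    elif action == "open_link":
        # Open the link attached to the manual.
        state["opened"] = state.get("link")
    elif action == "next_page":
        # Turn the page.
        state["page"] = state.get("page", 1) + 1
    return state
```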
  • The position/direction acquisition means acquires the terminal position and the imaging direction of the wearable terminal.
  • The terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal.
  • When imaging is performed by the wearable terminal, the imaging direction can be acquired from its geomagnetic sensor or acceleration sensor. It may also be acquired from other sources.
  • The estimation means estimates the position of the explanation object based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged explanation object can be estimated.
  • The identification means may identify the explanation object from its estimated position together with the image analysis. Identification accuracy can be improved by using position information. For example, if position information makes it possible to more accurately identify the dehumidifier X manufactured by company A in 2016, the reliability of the correspondingly displayed instruction manual also improves.
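One way to realize this estimation is to project an assumed distance along the imaging direction from the terminal's GPS position, using a flat-earth approximation that is adequate at the short ranges a wearable camera covers. The distance parameter is an assumption; the text does not specify how range to the object would be obtained.

```python
import math

def estimate_target_position(lat: float, lon: float,
                             heading_deg: float, distance_m: float):
    """Estimate the explanation object's (lat, lon) by projecting an
    assumed distance along the imaging direction from the terminal's
    GPS position. Flat-earth approximation; heading 0 = north, 90 = east."""
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(lat))
    heading = math.radians(heading_deg)
    d_north = distance_m * math.cos(heading)
    d_east = distance_m * math.sin(heading)
    return (lat + d_north / meters_per_deg_lat,
            lon + d_east / meters_per_deg_lon)
```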
  • The guideline display means displays, on the display plate of the wearable terminal, a guideline for imaging the explanation object as augmented reality.
  • A guideline such as a frame or crosshairs may be displayed. Having the image captured according to the guideline makes it easier to analyze.
  • The image acquisition means may acquire an image captured along the guideline.
  • This allows the explanation object to be identified efficiently.
  • The selection acceptance means accepts selection of a selection object from among the explanation objects seen through the display plate of the wearable terminal.
  • The selection may be accepted by the user looking at an explanation object seen through the display plate for a predetermined time.
  • The selection may be accepted by the user touching an explanation object seen through the display plate.
  • The selection may be accepted by placing a cursor on an explanation object seen through the display plate.
  • For example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor may be used to detect these selections.
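The gaze-dwell variant of selection acceptance (looking at a target for a predetermined time) can be sketched as follows. The two-second threshold, class name, and injected clock are assumptions for illustration.

```python
import time

DWELL_SECONDS = 2.0  # assumed threshold; the text only says "a predetermined time"

class GazeSelector:
    """Accepts selection of a target once the user's gaze has stayed
    on it for a predetermined dwell time."""

    def __init__(self, dwell_seconds: float = DWELL_SECONDS,
                 clock=time.monotonic):
        self.dwell = dwell_seconds
        self.clock = clock       # injectable for testing
        self._target = None
        self._since = None

    def update(self, gazed_target):
        """Feed the currently gazed-at target each frame; returns the
        target once the gaze has dwelt long enough, else None."""
        now = self.clock()
        if gazed_target != self._target:
            # Gaze moved: restart the dwell timer on the new target.
            self._target, self._since = gazed_target, now
            return None
        if gazed_target is not None and now - self._since >= self.dwell:
            return gazed_target
        return None
```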
  • The instruction manual display means may display, on the display plate of the wearable terminal, the instruction manual as augmented reality for only the selected object seen through the display plate. Since the instruction manual is displayed as augmented reality for only the selected object, it can be grasped precisely. If instruction manuals were displayed for all identified explanation objects, the display screen could become cluttered.

[Description of operation]
  • The wearable terminal display method of the present invention is a method of displaying, on the display plate of a wearable terminal, a collected instruction manual as augmented reality for an explanation object seen through the display plate.
  • The wearable terminal display method includes an image acquisition step, an identification step, a collection step, and an instruction manual display step. Although not shown in the drawings, it may similarly include a determination step, a change step, a detection step, an action result display step, a position/direction acquisition step, an estimation step, a guideline display step, and a selection acceptance step.
  • The image acquisition step acquires an image of an explanation object that has entered the field of view of the wearable terminal.
  • The image may be captured by the camera of the wearable terminal, or by a device other than the wearable terminal, as long as such an image can be acquired.
  • The image may be a moving image or a still image. To display the instruction manual in real time, a real-time image is preferable.
  • The identification step analyzes the image to identify the explanation object. For example, it determines whether the explanation object is the dehumidifier X of company A, the medical instrument Y of company B, the game Z of company C, and so on.
  • The explanation object is not limited to these examples.
  • The explanation object can be identified from color, shape, size, characters, marks, and the like.
  • Only the explanation object in the center of the wearable terminal's field of view may be identified. Identifying only the object in the center of the field of view significantly reduces the time required for identification.
  • Machine learning may improve the accuracy of the image analysis. For example, machine learning is performed using past images of explanation objects as teacher data.
  • The collection step collects an instruction manual according to the explanation object.
  • The instruction manual may be collected by referring to a database in which instruction manuals have been registered in advance.
  • It may also be collected by accessing Web content linked in advance to the explanation object. For example, a URL linking the explanation object to its instruction manual can be assigned, so that the manual can be collected from that Web content.
  • The instruction manual may also be collected from Web content found by searching for the explanation object on the Internet. For example, since the instruction manual is sometimes posted on the explanation object's website, it can be collected via an Internet search. In some cases, instruction manuals can also be collected from social networking services (SNS) or word-of-mouth sites.
  • The instruction manual display step displays, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
  • An instruction manual drawn with broken lines is displayed as augmented reality, on the display plate of the wearable terminal, for an explanation object drawn with solid lines that is seen through the display plate.
  • Solid lines represent real objects; broken lines represent augmented reality.
  • The instruction manual displayed as augmented reality may be displayed so as to overlap the explanation object seen through the display plate. Since this can make the explanation object difficult to see, the manual's display may be switchable ON/OFF.
  • The determination step determines whether the displayed instruction manual has been viewed. Whether it has been viewed may be determined by acquiring an image of the manual being viewed and analyzing that image. It may also be determined from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, or the like. For example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor may be used.
  • The change step marks the instruction manual as viewed if it is determined to have been viewed, and raises its attention level if it is determined not to have been viewed, so that it will be viewed. This makes it possible to see at a glance which instruction manuals have or have not been viewed.
  • A manual may be marked as viewed by putting a check in its check box.
  • A manual may be marked as viewed by pressing a stamp.
  • The attention level may be raised by changing the color or size of the instruction manual, or by pressing a stamp, so that the manual stands out.
  • The detection step detects an action on the displayed instruction manual.
  • The action is, for example, a gesture, a hand movement, or a gaze movement.
  • The action on the instruction manual may be detected from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, or the like.
  • For example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor may be used.
  • The action result display step displays, on the display plate of the wearable terminal, the result of the action as augmented reality for the explanation object seen through the display plate.
  • For example, the display of the instruction manual may be turned off.
  • A link attached to the instruction manual may be opened upon detecting an action for opening it.
  • A page may be turned.
  • Other actions may also be supported.
  • The position/direction acquisition step acquires the terminal position and the imaging direction of the wearable terminal.
  • The terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal.
  • When imaging is performed by the wearable terminal, the imaging direction can be acquired from its geomagnetic sensor or acceleration sensor. It may also be acquired from other sources.
  • The estimation step estimates the position of the explanation object based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged explanation object can be estimated.
  • The identification step may identify the explanation object from its estimated position together with the image analysis. Identification accuracy can be improved by using position information. For example, if position information makes it possible to more accurately identify the dehumidifier X manufactured by company A in 2016, the reliability of the correspondingly displayed instruction manual also improves.
  • The guideline display step displays, on the display plate of the wearable terminal, a guideline for imaging the explanation object as augmented reality.
  • A guideline such as a frame or crosshairs may be displayed. Having the image captured according to the guideline makes it easier to analyze.
  • The image acquisition step may acquire an image captured along the guideline.
  • This allows the explanation object to be identified efficiently.
  • The selection acceptance step accepts selection of a selection object from among the explanation objects seen through the display plate of the wearable terminal.
  • The selection may be accepted by the user looking at an explanation object seen through the display plate for a predetermined time.
  • The selection may be accepted by the user touching an explanation object seen through the display plate.
  • The selection may be accepted by placing a cursor on an explanation object seen through the display plate.
  • For example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor may be used to detect these selections.
  • The instruction manual may be displayed, on the display plate of the wearable terminal, as augmented reality for only the selected object seen through the display plate. Since the instruction manual is displayed for only the selected object, it can be grasped precisely. If instruction manuals were displayed for all identified explanation objects, the display screen could become cluttered.
  • The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program.
  • The program may be, for example, an application installed on a computer, provided as SaaS (software as a service) from a computer via a network, or provided in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (such as a CD-ROM), or a DVD (such as a DVD-ROM or DVD-RAM).
  • In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it.
  • The program may also be recorded in advance on a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.
  • As machine learning methods, for example, the following may be used:
  • nearest neighbor method
  • naive Bayes method
  • decision tree
  • support vector machine
  • reinforcement learning
  • Deep learning, in which feature quantities for learning are generated using a neural network, may also be used.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To specify an explanation subject from an image in a visual field of a wearable terminal, and display an instruction manual collected in accordance with the explanation subject on a display panel of the wearable terminal as augmented reality. [Solution] Provided is a wearable terminal display system for displaying an instruction manual of an explanation subject on a display panel of a wearable terminal, the wearable terminal display system comprising: an image acquisition means for acquiring an image of an explanation subject that has appeared in a visual field of the wearable terminal; a specifying means for performing image analysis on the image to specify the explanation subject; a collection means for collecting an instruction manual of the explanation subject; and an instruction manual display means for displaying, on a display panel of the wearable terminal, the instruction manual for the explanation subject that is seen through the display panel, the instruction manual being displayed as augmented reality.

Description

Wearable terminal display system, wearable terminal display method, and program
 The present invention relates to a wearable terminal display system, a wearable terminal display method, and a program for displaying, on the display plate of a wearable terminal, a collected instruction manual as augmented reality for an explanation object seen through the display plate.
 In recent years, instruction manuals have increasingly been computerized. For example, an apparatus has been provided that realizes a display control apparatus capable of presenting an electronic instruction manual of its own device, according to the device's state, in a more sophisticated form than before (Patent Document 1).
JP 2013-120455 A
 However, the system of Patent Document 1 cannot display instruction manuals for objects other than its own device.
 In view of the above problem, an object of the present invention is to provide a wearable terminal display system, a wearable terminal display method, and a program that identify an explanation object from an image of the field of view of a wearable terminal and display an instruction manual, collected according to that object, on the display plate of the wearable terminal as augmented reality.
 The present invention provides the following solutions.
 The invention according to the first aspect provides a wearable terminal display system for displaying an instruction manual of an explanation object on the display plate of a wearable terminal, comprising: image acquisition means for acquiring an image of an explanation object within the field of view of the wearable terminal; identification means for analyzing the image to identify the explanation object; collection means for collecting an instruction manual for the explanation object; and instruction manual display means for displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
 The invention according to the first aspect also provides a wearable terminal display method for displaying an instruction manual of an explanation object on the display plate of a wearable terminal, comprising: an image acquisition step of acquiring an image of an explanation object within the field of view of the wearable terminal; an identification step of analyzing the image to identify the explanation object; a collection step of collecting an instruction manual for the explanation object; and an instruction manual display step of displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
 The invention according to the first aspect further provides a program for causing a computer to execute: an image acquisition step of acquiring an image of an explanation object within the field of view of a wearable terminal; an identification step of analyzing the image to identify the explanation object; a collection step of collecting an instruction manual for the explanation object; and an instruction manual display step of displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality for the explanation object seen through the display plate.
 Simply by bringing an explanation object into the field of view of the wearable terminal, its instruction manual can be displayed on the display plate of the wearable terminal.
FIG. 1 is a schematic view of a wearable terminal display system. FIG. 2 is an example in which an instruction manual for an explanation object is collected and displayed on the display plate of a wearable terminal.
 The best mode for carrying out the present invention is described below. This is merely an example, and the technical scope of the present invention is not limited to it.
 本発明のウェアラブル端末表示システムは、ウェアラブル端末の表示板に、表示板を透過して見える説明対象に対して、収集された取扱説明書を拡張現実として表示するシステムである。ウェアラブル端末とはスマートグラスやヘッドマウントディスプレイなどの視界がある端末のことをいう。 The wearable terminal display system according to the present invention is a system for displaying the collected instruction manual as an augmented reality on the display target of the wearable terminal with respect to an explanatory object which is seen through the display board. A wearable terminal is a terminal with a view such as a smart glass or a head mounted display.
 本発明の好適な実施形態の概要について、図1に基づいて説明する。図1は、本発明の好適な実施形態であるウェアラブル端末表示システムの概要図である。 An outline of a preferred embodiment of the present invention will be described based on FIG. FIG. 1 is a schematic view of a wearable terminal display system according to a preferred embodiment of the present invention.
 As shown in FIG. 1, the wearable terminal display system includes an image acquisition unit, an identification unit, a collection unit, and an instruction manual display unit, each realized by a control unit reading a predetermined program. Although not shown, the system may similarly include a determination unit, a change unit, a detection unit, an action result display unit, a position/direction acquisition unit, an estimation unit, a guideline display unit, and a selection acceptance unit. These may be application-based, cloud-based, or otherwise. Each of the above units may be realized by a single computer or by two or more computers (for example, a server and a terminal).
 The image acquisition unit acquires an image of an explanation object that has entered the field of view of the wearable terminal. An image captured by the camera of the wearable terminal may be acquired, or a device other than the wearable terminal may be used, as long as such an image can be acquired. The image may be a moving image or a still image. To display the instruction manual in real time, a real-time image is preferable.
 The identification unit analyzes the image to identify the explanation object: for example, whether it is company A's dehumidifier X, company B's medical instrument Y, or company C's game Z. The explanation object is not limited to these. The object can be identified from its color, shape, size, characters, marks, and the like. If identifying every explanation object in the image would take too long, only the explanation object at the center of the field of view of the wearable terminal may be identified, which greatly shortens the time required for identification. The accuracy of the image analysis may be improved by machine learning, for example by performing machine learning with past images of explanation objects as teacher data.
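The center-of-view restriction described above can be sketched as follows. This is a minimal illustration, not the invention's implementation: the detector output format and the object labels are assumptions, and a real system would obtain the detections from an image-analysis model.

```python
# Pick the detected object whose bounding-box center is closest to the
# center of the wearable terminal's field of view. Each detection is a
# hypothetical tuple: (label, x, y, width, height) in screen pixels.

def center_object(detections, frame_w, frame_h):
    """Return the detection nearest the frame center, or None if empty."""
    cx, cy = frame_w / 2, frame_h / 2

    def sq_dist_to_center(det):
        _, x, y, w, h = det
        bx, by = x + w / 2, y + h / 2  # bounding-box center
        return (bx - cx) ** 2 + (by - cy) ** 2

    return min(detections, key=sq_dist_to_center) if detections else None

detections = [
    ("dehumidifier X", 400, 200, 200, 300),  # near the center of view
    ("game Z", 20, 30, 100, 100),            # top-left corner
]
target = center_object(detections, 1000, 800)  # -> the dehumidifier
```

Identifying only this single candidate, rather than every object in view, is what shortens the identification time.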
 The collection unit collects an instruction manual corresponding to the explanation object. The manual may be collected by referring to a database in which instruction manuals are registered in advance, or by accessing Web content linked to the explanation object in advance, for example by assigning a URL that ties the explanation object to its instruction manual. The manual may also be collected from Web content found by an internet search for the explanation object; for example, an instruction manual is sometimes posted on the product's home page, so it can be collected via internet search. In some cases, instruction manuals can also be collected from SNS (social networking services) or review sites.
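The collection order just described (registered database first, then pre-linked Web content, then an internet search) can be sketched as a simple fallback chain. All entries, URLs, and the search stub below are hypothetical illustrations.

```python
# Fallback chain for collecting a manual: database -> linked URL -> search.

MANUAL_DB = {"dehumidifier X": "Dehumidifier X instruction manual (rev. 3)"}
LINKED_URLS = {"game Z": "https://example.com/game-z/manual"}

def search_web(name):
    # Stand-in for an internet search; a real system would call a search API
    # and extract the manual from the retrieved Web content.
    return f"manual found via web search for {name}"

def collect_manual(name):
    if name in MANUAL_DB:       # database with pre-registered manuals
        return MANUAL_DB[name]
    if name in LINKED_URLS:     # Web content linked to the object in advance
        return f"manual fetched from {LINKED_URLS[name]}"
    return search_web(name)     # internet search as the last resort
```

For example, `collect_manual("medical instrument Y")` falls through to the search stub because that object is in neither lookup table.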
 The instruction manual display unit displays, on the display plate of the wearable terminal, the instruction manual as augmented reality over the explanation object seen through the display plate. For example, as shown in FIG. 2, an instruction manual drawn with broken lines is displayed as augmented reality over the explanation object drawn with solid lines seen through the display plate; for ease of understanding, solid lines denote real objects and broken lines denote augmented reality. Displaying the manual in augmented reality over the explanation object lets the wearer visually grasp what instruction manual exists for that object. The manual displayed as augmented reality may be overlaid on the explanation object seen through the display plate, but since this makes the object harder to see, the display of the manual may be made switchable ON/OFF.
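The ON/OFF switching of the overlay mentioned above can be sketched with a small state object. This is an assumption-level sketch: a real implementation would drive the wearable terminal's rendering layer rather than return strings.

```python
# Toggleable augmented-reality overlay for an instruction manual.

class ManualOverlay:
    def __init__(self, manual_text):
        self.manual_text = manual_text
        self.visible = True  # shown as augmented reality by default

    def toggle(self):
        # Lets the wearer hide the manual when it obscures the object itself.
        self.visible = not self.visible

    def render(self):
        # Returns what would be drawn over the explanation object.
        return self.manual_text if self.visible else ""
```

Calling `toggle()` once hides the manual so the explanation object is unobstructed; calling it again restores the overlay.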
 The determination unit determines whether the displayed instruction manual has been viewed. Whether the manual has been viewed may be determined by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal or of sensors worn by the viewer, for example a gaze-detection sensor, a motion sensor, or an acceleration sensor.
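One way to make this judgment from gaze-sensor information is a dwell-time test: the manual counts as viewed once the gaze has stayed inside its display region continuously for a minimum time. The sample format and the one-second threshold below are assumptions for illustration.

```python
# Judge "viewed" from gaze samples: (timestamp_seconds, x, y) tuples,
# against the manual's on-screen region (x0, y0, x1, y1).

def was_viewed(gaze_samples, region, min_dwell=1.0):
    """True if consecutive in-region samples span >= min_dwell seconds."""
    x0, y0, x1, y1 = region
    start = None  # time the gaze entered the region, or None if outside
    for t, x, y in gaze_samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            if start is None:
                start = t
            if t - start >= min_dwell:
                return True
        else:
            start = None  # gaze left the region; reset the dwell timer
    return False
```

A glance that leaves the manual's region resets the timer, so brief passes over the manual do not mark it as viewed.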
 The change unit changes the instruction manual to "viewed" when it is determined to have been viewed, and changes its degree of attention so that it will be viewed when it is determined not to have been viewed. This makes it possible to visually grasp which instruction manuals have been viewed and which have not. For example, a manual may be marked as viewed by checking a check box on it or by pressing a stamp on it; the degree of attention may be changed by altering the manual's color or size, or by stamping it so that it stands out.
 The detection unit detects an action on the displayed instruction manual, for example a gesture, a hand movement, or a gaze movement. An action can be detected by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal or of sensors worn by the viewer, such as a gaze-detection sensor, a motion sensor, or an acceleration sensor.
 The action result display unit displays, on the display plate of the wearable terminal, the result corresponding to the action as augmented reality over the explanation object seen through the display plate. For example, when an action to dismiss the instruction manual is detected, the display of the manual may be turned off; when an action to open a link attached to the manual is detected, the link may be opened; when an action to turn a page of the manual is detected, the page may be turned. Of course, other actions are also possible.
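The three example actions above (dismiss, open link, turn page) can be sketched as a small dispatch over the manual's display state. The action names and the state dictionary are hypothetical; a real system would feed the detected gesture into such a dispatcher and re-render the overlay.

```python
# Map a detected action to its result on the manual's display state.

def apply_action(state, action):
    state = dict(state)  # do not mutate the caller's state
    if action == "dismiss":
        state["visible"] = False           # turn the manual's display off
    elif action == "open_link":
        state["opened_url"] = state.get("link")  # open the attached link
    elif action == "next_page":
        # Turn the page, clamped to the last page of the manual.
        state["page"] = min(state["page"] + 1, state["pages"])
    return state  # unknown actions leave the state unchanged
```

For instance, repeated `next_page` actions stop advancing once the last page is reached, while `dismiss` simply hides the overlay.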
 The position/direction acquisition unit acquires the terminal position and imaging direction of the wearable terminal. For example, the terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal, and when the wearable terminal captures the image, the imaging direction can be acquired from the geomagnetic sensor or acceleration sensor of the wearable terminal. They may also be acquired from other sources.
 The estimation unit estimates the position of the explanation object based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged explanation object can be estimated.
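The geometry of this estimate can be sketched as a simple projection: starting from the terminal position, step a distance along the imaging direction. This assumes a flat plane and a known (here, assumed) distance to the object; the specification does not fix how distance is obtained, so a real system would need depth sensing or map data.

```python
import math

# Estimate the explanation object's position from the terminal position
# and the imaging direction (compass heading, degrees clockwise from north).

def estimate_position(term_x, term_y, heading_deg, distance):
    """Project `distance` units from (term_x, term_y) along the heading
    and return the estimated (x, y) of the explanation object."""
    rad = math.radians(heading_deg)
    return (term_x + distance * math.sin(rad),   # east component
            term_y + distance * math.cos(rad))   # north component
```

For example, a terminal at the origin imaging due east (heading 90°) toward an object 10 units away estimates the object at roughly (10, 0).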
 The identification unit may also identify the explanation object from the estimated object position together with the image analysis; using position information can improve the accuracy of identification. For example, if position information improves the accuracy of identifying the object as company A's dehumidifier X manufactured in 2016, the reliability of the instruction manual displayed for it improves correspondingly.
 The guideline display unit displays, on the display plate of the wearable terminal, a guideline for imaging the explanation object as augmented reality; for example, a frame or a crosshair may be displayed. Having the image captured along the guideline makes image analysis easier.
 The acquisition unit may also acquire only images captured along the guideline; acquiring and analyzing only such images allows the explanation object to be identified efficiently.
 The selection acceptance unit accepts selection of a selection target from among the explanation objects seen through the display plate of the wearable terminal. For example, a selection may be accepted by gazing at an explanation object seen through the display plate for a fixed time, by touching it, or by placing a cursor on it, using, for example, a gaze-detection sensor, a motion sensor, or an acceleration sensor.
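The cursor-based variant of this selection can be sketched as a hit test over the screen-space regions of the visible explanation objects. The object names and rectangles below are hypothetical examples.

```python
# Accept a selection by testing which explanation object's on-screen
# region contains the cursor (or gaze point).

def select_at(cursor, objects):
    """cursor: (x, y); objects: dict of name -> (x0, y0, x1, y1).
    Returns the name of the object under the cursor, or None."""
    x, y = cursor
    for name, (x0, y0, x1, y1) in objects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

The same hit test serves the gaze variant if the cursor position is replaced by the gaze point reported by a gaze-detection sensor.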
 The instruction manual display unit may also display, on the display plate of the wearable terminal, the instruction manual as augmented reality only for the selection target seen through the display plate. Since the manual is displayed as augmented reality only for the selected target, it can be grasped with pinpoint accuracy; displaying manuals for every identified explanation object can clutter the screen of the display plate.

[Description of Operation]
 Next, the wearable terminal display method is described. The wearable terminal display method of the present invention is a method of displaying, on the display plate of a wearable terminal, collected instruction manuals as augmented reality over an explanation object seen through the display plate.
 The wearable terminal display method includes an image acquisition step, an identification step, a collection step, and an instruction manual display step. Although not shown, it may similarly include a determination step, a change step, a detection step, an action result display step, a position/direction acquisition step, an estimation step, a guideline display step, and a selection acceptance step.
 The image acquisition step acquires an image of an explanation object that has entered the field of view of the wearable terminal. An image captured by the camera of the wearable terminal may be acquired, or a device other than the wearable terminal may be used, as long as such an image can be acquired. The image may be a moving image or a still image. To display the instruction manual in real time, a real-time image is preferable.
 The identification step analyzes the image to identify the explanation object: for example, whether it is company A's dehumidifier X, company B's medical instrument Y, or company C's game Z. The explanation object is not limited to these. The object can be identified from its color, shape, size, characters, marks, and the like. If identifying every explanation object in the image would take too long, only the explanation object at the center of the field of view of the wearable terminal may be identified, which greatly shortens the time required for identification. The accuracy of the image analysis may be improved by machine learning, for example by performing machine learning with past images of explanation objects as teacher data.
 The collection step collects an instruction manual corresponding to the explanation object. The manual may be collected by referring to a database in which instruction manuals are registered in advance, or by accessing Web content linked to the explanation object in advance, for example by assigning a URL that ties the explanation object to its instruction manual. The manual may also be collected from Web content found by an internet search for the explanation object; for example, an instruction manual is sometimes posted on the product's home page, so it can be collected via internet search. In some cases, instruction manuals can also be collected from SNS (social networking services) or review sites.
 The instruction manual display step displays, on the display plate of the wearable terminal, the instruction manual as augmented reality over the explanation object seen through the display plate. For example, as shown in FIG. 2, an instruction manual drawn with broken lines is displayed as augmented reality over the explanation object drawn with solid lines seen through the display plate; for ease of understanding, solid lines denote real objects and broken lines denote augmented reality. Displaying the manual in augmented reality over the explanation object lets the wearer visually grasp what instruction manual exists for that object. The manual displayed as augmented reality may be overlaid on the explanation object seen through the display plate, but since this makes the object harder to see, the display of the manual may be made switchable ON/OFF.
 The determination step determines whether the displayed instruction manual has been viewed. Whether the manual has been viewed may be determined by acquiring the image being viewed and analyzing the image in which the manual appears, or from sensor information of the wearable terminal or of sensors worn by the viewer, for example a gaze-detection sensor, a motion sensor, or an acceleration sensor.
 The change step changes the instruction manual to "viewed" when it is determined to have been viewed, and changes its degree of attention so that it will be viewed when it is determined not to have been viewed. This makes it possible to visually grasp which instruction manuals have been viewed and which have not. For example, a manual may be marked as viewed by checking a check box on it or by pressing a stamp on it; the degree of attention may be changed by altering the manual's color or size, or by stamping it so that it stands out.
 The detection step detects an action on the displayed instruction manual, for example a gesture, a hand movement, or a gaze movement. An action can be detected by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal or of sensors worn by the viewer, such as a gaze-detection sensor, a motion sensor, or an acceleration sensor.
 The action result display step displays, on the display plate of the wearable terminal, the result corresponding to the action as augmented reality over the explanation object seen through the display plate. For example, when an action to dismiss the instruction manual is detected, the display of the manual may be turned off; when an action to open a link attached to the manual is detected, the link may be opened; when an action to turn a page of the manual is detected, the page may be turned. Of course, other actions are also possible.
 The position/direction acquisition step acquires the terminal position and imaging direction of the wearable terminal. For example, the terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal, and when the wearable terminal captures the image, the imaging direction can be acquired from the geomagnetic sensor or acceleration sensor of the wearable terminal. They may also be acquired from other sources.
 The estimation step estimates the position of the explanation object based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged explanation object can be estimated.
 The identification step may also identify the explanation object from the estimated object position together with the image analysis; using position information can improve the accuracy of identification. For example, if position information improves the accuracy of identifying the object as company A's dehumidifier X manufactured in 2016, the reliability of the instruction manual displayed for it improves correspondingly.
 The guideline display step displays, on the display plate of the wearable terminal, a guideline for imaging the explanation object as augmented reality; for example, a frame or a crosshair may be displayed. Having the image captured along the guideline makes image analysis easier.
 The acquisition step may also acquire only images captured along the guideline; acquiring and analyzing only such images allows the explanation object to be identified efficiently.
 The selection acceptance step accepts selection of a selection target from among the explanation objects seen through the display plate of the wearable terminal. For example, a selection may be accepted by gazing at an explanation object seen through the display plate for a fixed time, by touching it, or by placing a cursor on it, using, for example, a gaze-detection sensor, a motion sensor, or an acceleration sensor.
 The instruction manual display step may also display, on the display plate of the wearable terminal, the instruction manual as augmented reality only for the selection target seen through the display plate. Since the manual is displayed as augmented reality only for the selected target, it can be grasped with pinpoint accuracy; displaying manuals for every identified explanation object can clutter the screen of the display plate.
 The units and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program. The program may be, for example, an application installed on a computer, may take the SaaS (Software as a Service) form of being provided from a computer via a network, or may be provided recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), or a DVD (DVD-ROM, DVD-RAM, etc.). In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it. The program may also be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.
 As concrete algorithms for the machine learning mentioned above, the nearest-neighbor method, the naive Bayes method, decision trees, support vector machines, reinforcement learning, and the like may be used. Deep learning, in which a neural network generates the features to learn by itself, may also be used.
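As one of the named algorithms, the nearest-neighbor method can be sketched in a few lines: a new feature vector is given the label of the closest past example. The feature vectors and labels below are illustrative assumptions, not real teacher data from images.

```python
# Minimal 1-nearest-neighbor classifier over labelled past examples.

def nearest_neighbor(sample, training):
    """training: list of (feature_vector, label) pairs.
    Returns the label of the closest example by squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training, key=lambda pair: sq_dist(sample, pair[0]))[1]

# Hypothetical features extracted from past images of explanation objects.
training = [((0.0, 0.0), "dehumidifier X"), ((10.0, 10.0), "game Z")]
```

In the system above, the feature vectors would come from image analysis of past images of explanation objects used as teacher data.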
 Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. Further, the effects described in the embodiments of the present invention merely list the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.

Claims (13)

  1.  A wearable terminal display system for displaying an instruction manual of an explanation object on a display plate of a wearable terminal, comprising:
     image acquisition means for acquiring an image of an explanation object that has entered the field of view of the wearable terminal;
     identification means for analyzing the image to identify the explanation object;
     collection means for collecting the instruction manual of the explanation object; and
     instruction manual display means for displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality over the explanation object seen through the display plate.
  2.  The wearable terminal display system according to claim 1, wherein the identification means identifies only the explanation object at the center of the field of view of the wearable terminal.
  3.  The wearable terminal display system according to claim 1, wherein the collection means collects the instruction manual of the explanation object by referring to a database in which instruction manuals are registered in advance.
  4.  The wearable terminal display system according to claim 1, wherein the collection means collects the instruction manual by accessing Web content linked to the explanation object in advance.
  5.  The wearable terminal display system according to claim 1, wherein the collection means searches the internet for the explanation object and collects the instruction manual from the retrieved Web content.
  6.  The wearable terminal display system according to claim 1, further comprising:
     determination means for determining whether the displayed instruction manual has been viewed; and
     change means for changing the instruction manual to viewed when it is determined to have been viewed.
  7.  The wearable terminal display system according to claim 1, further comprising:
     determination means for determining whether the displayed instruction manual has been viewed; and
     change means for changing the degree of attention so that the instruction manual will be viewed when it is determined not to have been viewed.
  8.  The wearable terminal display system according to claim 1, further comprising:
     detection means for detecting an action on the displayed instruction manual; and
     action result display means for displaying, on the display plate of the wearable terminal, a result corresponding to the action as augmented reality over the explanation object seen through the display plate.
  9.  The wearable terminal display system according to claim 1, further comprising:
     position/direction acquisition means for acquiring a terminal position and an imaging direction of the wearable terminal; and
     estimation means for estimating a position of the explanation object based on the terminal position and the imaging direction,
     wherein the identification means identifies the explanation object from the estimated position and the image analysis.
  10.  The wearable terminal display system according to claim 1, further comprising guideline display means for displaying, on the display plate of the wearable terminal, a guideline for imaging the explanation target as augmented reality,
    wherein the acquisition means acquires the image captured along the guideline.
  11.  The wearable terminal display system according to claim 1, further comprising selection reception means for receiving selection of a selection target from the explanation target seen through the display plate of the wearable terminal,
    wherein the instruction manual display means displays the instruction manual as augmented reality on the display plate of the wearable terminal, aligned only with the selected target seen through the display plate.
  12.  A wearable terminal display method for displaying an instruction manual of an explanation target on a display plate of a wearable terminal, comprising:
    an image acquisition step of acquiring an image of an explanation target that has entered the field of view of the wearable terminal;
    an identification step of analyzing the image to identify the explanation target;
    a collection step of collecting an instruction manual for the explanation target; and
    an instruction manual display step of displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality over the explanation target seen through the display plate.
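The four steps of this method claim (acquire, identify, collect, display) form a straightforward pipeline. The following is a minimal Python sketch of that pipeline under stated assumptions: every name is hypothetical, the camera is a stub, and image analysis is reduced to a label lookup where a real system would run object recognition.

```python
from dataclasses import dataclass

@dataclass
class Manual:
    target_name: str
    text: str

# Hypothetical manual repository keyed by identified object name.
MANUALS = {
    "coffee_maker": Manual("coffee_maker", "1. Fill tank. 2. Press brew."),
}

def acquire_image(camera):
    """Image acquisition step: capture what is in the wearable's view."""
    return camera()  # stand-in for a camera API

def identify_target(image):
    """Identification step: image analysis reduced to a label lookup."""
    return image.get("label")  # a real system would run object recognition

def collect_manual(target):
    """Collection step: fetch the manual for the identified object."""
    return MANUALS.get(target)

def display_as_ar(manual):
    """Display step: overlay the manual on the see-through display plate."""
    return f"[AR overlay] {manual.target_name}: {manual.text}"

def wearable_display_method(camera):
    image = acquire_image(camera)
    target = identify_target(image)
    manual = collect_manual(target)
    return display_as_ar(manual) if manual else None

print(wearable_display_method(lambda: {"label": "coffee_maker"}))
```

Returning `None` when no manual is found keeps the sketch honest about the failure path the claim leaves unspecified.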
  13.  A program for causing a computer to execute:
    an image acquisition step of acquiring an image of an explanation target that has entered the field of view of a wearable terminal;
    an identification step of analyzing the image to identify the explanation target;
    a collection step of collecting an instruction manual for the explanation target; and
    an instruction manual display step of displaying, on the display plate of the wearable terminal, the instruction manual as augmented reality over the explanation target seen through the display plate.
PCT/JP2017/023814 2017-06-28 2017-06-28 Wearable terminal display system, wearable terminal display method, and program WO2019003359A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/023814 WO2019003359A1 (en) 2017-06-28 2017-06-28 Wearable terminal display system, wearable terminal display method, and program


Publications (1)

Publication Number Publication Date
WO2019003359A1 true WO2019003359A1 (en) 2019-01-03

Family

ID=64741211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/023814 WO2019003359A1 (en) 2017-06-28 2017-06-28 Wearable terminal display system, wearable terminal display method, and program

Country Status (1)

Country Link
WO (1) WO2019003359A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002149478A (en) * 2000-08-29 2002-05-24 Fujitsu Ltd Method for automatically displaying update information and device for the same, and medium and program
JP2009008905A (en) * 2007-06-28 2009-01-15 Ricoh Co Ltd Information display apparatus and information display system
JP2011114781A (en) * 2009-11-30 2011-06-09 Brother Industries Ltd Head-mounted display device, and image sharing system employing the same
JP2015153157A * 2014-02-14 2015-08-24 KDDI Corp Virtual information management system
JP2017049763A * 2015-09-01 2017-03-09 Toshiba Corp Electronic apparatus, support system, and support method
WO2017081920A1 * 2015-11-10 2017-05-18 NEC Corp Information processing device, control method, and program


Similar Documents

Publication Publication Date Title
US20180247361A1 (en) Information processing apparatus, information processing method, wearable terminal, and program
US11630861B2 (en) Method and apparatus for video searching, terminal and storage medium
CN107690657B Finding a business from images
CN106164959B (en) Behavioral event measurement system and related method
TWI418763B (en) Mobile imaging device as navigator
KR102093198B1 (en) Method and apparatus for user interface using gaze interaction
KR101925701B1 (en) Determination of attention towards stimuli based on gaze information
US10372958B2 (en) In-field data acquisition and formatting
CA2762662A1 (en) Method for automatic mapping of eye tracker data to hypermedia content
JP2010061218A (en) Web advertising effect measurement device, web advertising effect measurement method, and program
Anagnostopoulos et al. Gaze-Informed location-based services
CN105210009A (en) Display control device, display control method, and recording medium
Göbel et al. FeaturEyeTrack: automatic matching of eye tracking data with map features on interactive maps
JP6887198B2 (en) Wearable device display system, wearable device display method and program
CN103946887A (en) Gaze position estimation system, control method for gaze position estimation system, gaze position estimation device, control method for gaze position estimation device, program, and information storage medium
US20140244566A1 (en) Accurately Estimating the Audience of Digital Content
WO2018198320A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2019021446A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2019003359A1 (en) Wearable terminal display system, wearable terminal display method, and program
Kimura et al. The Reading-Life Log--Technologies to Recognize Texts That We Read
WO2018216221A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2018216220A1 (en) Wearable terminal display system, wearable terminal display method and program
JP6762470B2 (en) Wearable device display system, wearable device display method and program
US9911237B1 (en) Image processing techniques for self-captured images
WO2019021447A1 (en) Wearable terminal display system, wearable terminal display method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17916148

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17916148

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP