KR101760898B1 - Context awareness based interactive guidance system and control method thereof - Google Patents


Info

Publication number
KR101760898B1
KR101760898B1 (application KR1020150121029A)
Authority
KR
South Korea
Prior art keywords
information
server
guide
language
area
Prior art date
Application number
KR1020150121029A
Other languages
Korean (ko)
Other versions
KR20170025095A (en)
Inventor
허철균
Original Assignee
허철균
Priority date
Filing date
Publication date
Application filed by 허철균
Priority to KR1020150121029A
Publication of KR20170025095A
Application granted
Publication of KR101760898B1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q30/0251 — Targeted advertisements (G06Q30/00 Commerce; G06Q30/02 Marketing)
    • G06Q30/0261 — Targeted advertisements based on user location
    • G06Q30/0268 — Targeted advertisements at point-of-sale [POS]
    • G06F3/16 — Sound input; sound output (G06F3/00 Input/output arrangements)
    • G06K9/00221
    • G06K9/00369
    • H04R1/34 — Desired directional characteristic obtained by using a single transducer with sound reflecting, diffracting, directing or guiding means (H04R1/00 Details of transducers; H04R1/20 Frequency or directional characteristics)


Abstract

The present invention discloses a context-awareness-based interactive guidance system and a control method thereof. According to the present invention, when an object enters a preset area, the system recognizes the situation and then exchanges information (including voice information and sound) with the object through a directional microphone and a directional speaker installed for that area. Because information is delivered only to the object that requires it, user satisfaction is increased, and the object can receive the information without any additional device.

Description

[0001] Context awareness based interactive guidance system and control method thereof

The present invention relates to a context-awareness-based interactive guidance system and a control method thereof. More particularly, it relates to a system that, when an object enters a preset area, recognizes the situation and then exchanges information (including voice information and sound) with the object through a directional microphone and a directional speaker installed for that area, and to a control method for such a system.

An announcement broadcasting system provides various kinds of information in multi-use facilities such as shopping malls, department stores, and subways.

Such an announcement broadcasting system operates in a one-way manner and repeatedly provides only preset information. Bidirectional communication with a customer is therefore difficult, and because the information is broadcast indiscriminately, it also reaches customers who do not want to receive it, causing inconvenience.

Korean Patent No. 10-1081179 [Title: Method and Apparatus for Adjusting Gain for Automatic Noise Reduction]

SUMMARY OF THE INVENTION. It is an object of the present invention to provide a context-awareness-based interactive guidance system, and a control method thereof, that, when an object enters a predetermined area, exchange information (including voice information and sound) with the object through a directional microphone and a directional speaker installed for that area.

It is another object of the present invention to provide a context-awareness-based interactive guidance system, and a control method thereof, that recognize an object entering a preset area and the surrounding situation, recognize and confirm the object's guidance request, recognize and confirm the language preferred by the object, confirm personal information related to the object, and perform bidirectional communication with the object based on the confirmed personal information.

The context-awareness-based interactive guidance system according to an embodiment of the present invention delivers guidance information or advertisement information to an object in a predetermined area through a directional speaker and a directional microphone, and includes: a motion recognition sensor for sensing an object entering the predetermined area; a camera unit for photographing the object when the object entering the predetermined area remains within the area for a predetermined time or longer; a control unit for recognizing the foot or body direction of the photographed object, identifying a language corresponding to the recognized foot or body direction, and recognizing the face of the object; a communication unit for transmitting information on the recognized face, single code information generated for the recognized face, information on the identified language, and the unique identification information of the guide apparatus to a server, and for receiving at least one of guide information and advertisement information, provided in the identified language, that the server sends in response to the transmitted information; a directional speaker for directing at least one of the received guide information and advertisement information toward the object in the area and outputting it under the control of the control unit; and a directional microphone for receiving voice information output from the object.

As an example related to the present invention, the control unit may identify, from among preset per-direction languages, the language corresponding to the recognized foot or body direction of the object.
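
The per-direction language table described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the direction labels and language codes are hypothetical examples.

```python
# Sketch of the control unit's preset per-direction language table.
# Directions and language codes are illustrative assumptions.
PRESET_DIRECTION_LANGUAGES = {
    "north": "ko",  # e.g. the direction of the domestic entrance
    "east": "en",
    "south": "ja",
    "west": "zh",
}

def identify_language(direction: str, default: str = "ko") -> str:
    """Return the preset language for a recognized foot/body direction."""
    return PRESET_DIRECTION_LANGUAGES.get(direction, default)
```

A direction outside the preset table falls back to a default language, since the patent does not specify behavior for unrecognized directions.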

As an example related to the present invention, the control unit may remove noise from the voice information received through the directional microphone, perform signal processing on the noise-removed voice information, transmit the signal-processed voice information to the server, and control the directional speaker to direct toward the object and output at least one of other guide information and other advertisement information that the server sends in response to the transmitted signal-processed voice information.
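
The noise-removal and server round trip above can be sketched as follows. The patent does not specify a noise-removal algorithm, so a simple moving-average smoother stands in for it, and `send_to_server` is an assumed callable representing the server request.

```python
def remove_noise(samples, window=3):
    """Moving-average smoother standing in for the noise-removal step.

    A minimal stand-in: the patent leaves the actual algorithm unspecified.
    """
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

def handle_voice(samples, send_to_server):
    """Denoise microphone samples, forward them, and return the server's reply."""
    cleaned = remove_noise(samples)
    return send_to_server(cleaned)
```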

As an example related to the present invention, when movement of an object in the area is detected through the motion recognition sensor or the camera unit, the control unit may control the directions of the directional speaker and the directional microphone to follow the movement of the object.
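
Following a moving object reduces, at minimum, to recomputing the bearing from the device to the object as its position updates. The sketch below assumes a 2-D coordinate frame with the device at the origin; none of these names come from the patent.

```python
import math

def bearing_deg(obj_x, obj_y, device_x=0.0, device_y=0.0):
    """Bearing (degrees, counter-clockwise from the +x axis) from device to object."""
    return math.degrees(math.atan2(obj_y - device_y, obj_x - device_x)) % 360.0

def track(positions):
    """Steering angles for the directional speaker/microphone as the object moves."""
    return [bearing_deg(x, y) for x, y in positions]
```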

The context-awareness-based interactive guidance system according to another embodiment of the present invention delivers guidance information or advertisement information to an object in a predetermined area through a directional speaker and a directional microphone, and includes: a guide device that detects an object entering the predetermined area, photographs the object when the detected object remains within the area for a predetermined time or longer, recognizes the foot or body direction of the photographed object, identifies a language corresponding to that direction, recognizes the face of the photographed object, transmits information on the recognized face, single code information generated for the recognized face, information on the identified language, and the unique identification information of the guide device to a server, receives at least one of first guide information and first advertisement information, provided in the identified language, that the server sends in response to the transmitted information on the recognized face, and directs the received information toward the object in the area through the directional speaker; and the server, which identifies, from a plurality of pieces of personal information corresponding to previously stored face information, the personal information in the identified language that corresponds to the information on the recognized face transmitted from the guide device, and transmits at least one of customized first guide information and first advertisement information corresponding to the identified personal information to the guide device.

As an example related to the present invention, the guide device may receive voice information transmitted from the object through the directional microphone, remove noise from the voice information, perform signal processing on the noise-removed voice information, transmit the signal-processed voice information to the server, and direct at least one of second guide information and second advertisement information, sent from the server in response to the transmitted signal-processed voice information, toward the object in the area through the directional speaker.

As an example related to the present invention, when movement of the object in the area is detected, the guide device may control the directions of the directional speaker and the directional microphone to correspond to the movement of the object.

A control method of a context-aware interactive guidance system according to an embodiment of the present invention delivers guidance information or advertisement information to an object in a predetermined area through a directional speaker and a directional microphone, and includes: detecting, through a control unit, an object entering the predetermined area based on image information photographed through a camera unit; photographing the object through the camera unit when the sensed object stays in the predetermined area for a predetermined time or longer; recognizing, through the control unit, the foot or body direction of the photographed object; identifying, through the control unit, a language corresponding to the recognized foot or body direction; recognizing, through the control unit, the face of the photographed object; transmitting, through a communication unit, information on the recognized face, single code information generated for the recognized face, information on the identified language, and the unique identification information of the guide apparatus to a server; receiving, through the communication unit, at least one of guide information and advertisement information, provided in the identified language, that the server sends in response to the transmitted information on the recognized face; and controlling, through the control unit, the directional speaker to direct at least one of the received guide information and advertisement information toward the object in the area and output it.
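
One pass of the control method above can be sketched as a single function. The `event` dictionary, the `lookup_language` callable, and the `request_guidance` callable are all assumptions standing in for the sensing results, the language table, and the server round trip; the single-code generation is likewise only a placeholder.

```python
def guidance_cycle(event, lookup_language, request_guidance, dwell_s=3.0):
    """One pass of the sensing-to-output control method (names are assumed)."""
    if event["dwell_s"] < dwell_s:
        return None  # object did not stay long enough; no photograph is taken
    payload = {
        "face": event["face"],
        "code": hash(event["face"]) & 0xFFFF,  # stand-in for the single code info
        "language": lookup_language(event["direction"]),
        "device_id": event["device_id"],
    }
    # The server replies with guide/advertisement info in the identified language.
    return request_guidance(payload)
```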

As an example related to the present invention, the method may further include: receiving voice information output from an object in the area through the directional microphone; removing, through the control unit, noise from the received voice information and performing signal processing on the noise-removed voice information; transmitting the signal-processed voice information to the server through the communication unit; controlling, through the control unit, the directional speaker to direct at least one of the other guide information and other advertisement information, sent from the server in response to the transmitted signal-processed voice information, toward the object in the area; and controlling, through the control unit, the directions of the directional speaker and the directional microphone to correspond to the movement of the object when movement of the object in the area is detected.

A control method of a context-aware interactive guidance system according to another embodiment of the present invention delivers guidance information or advertisement information to an object in a predetermined area through a directional speaker and a directional microphone, and includes: detecting, through a guide device, an object entering the predetermined area, and photographing the object when the detected object remains within the area for a predetermined time or longer; recognizing, through the guide device, the foot or body direction of the photographed object; identifying, through the guide device, a language corresponding to the recognized foot or body direction; recognizing, through the guide device, the face of the photographed object; transmitting, through the guide device, information on the recognized face, single code information generated for the recognized face, information on the identified language, and the unique identification information of the guide device to a server; identifying, through the server, from a plurality of pieces of personal information corresponding to previously stored face information, the personal information corresponding to the transmitted information on the recognized face and to the identified language; transmitting, through the server, at least one of customized first guide information and first advertisement information corresponding to the identified personal information to the guide device; directing, through the guide device, at least one of the first guide information and the first advertisement information, provided in the identified language, toward the object in the area through the directional speaker; receiving, through the guide device, voice information transmitted from the object through the directional microphone, removing noise from the voice information, performing signal processing on the noise-removed voice information, and transmitting the signal-processed voice information to the server; transmitting, through the server, at least one of second guide information and second advertisement information to the guide device in response to the signal-processed voice information; and directing, through the guide device, at least one of the second guide information and the second advertisement information toward the object in the area through the directional speaker.
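
The server-side personalization step above can be sketched as a lookup from the recognized face to stored personal information and then to customized content. The `profiles` and `catalog` structures are assumptions made for illustration; the patent does not specify how the server stores or indexes this data.

```python
def personalized_guidance(face_id, language, profiles, catalog):
    """Server-side sketch: select customized first guide/advert info.

    `profiles` maps a recognized face to stored personal information,
    and `catalog` maps a customer segment and language to prepared content.
    """
    profile = profiles.get(face_id)
    segment = profile["segment"] if profile else "default"
    # Content is returned in the language identified by the guide device.
    return catalog[segment][language]
```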

According to the present invention, when an object enters a preset area, information (including voice information and sound) is exchanged with the object through the directional microphone and the directional speaker installed for that area, so that user satisfaction can be increased by providing information only to the object that requires it, and the information can be received without any additional device.

In addition, the present invention recognizes and confirms an object entering a preset area and the surrounding situation, recognizes and confirms the object's guidance request, recognizes and confirms the language preferred by the object, confirms personal information related to the object, and performs bidirectional communication with the object based on the confirmed personal information, so that information or advertisements customized to the specific object can be provided.

FIG. 1 is a block diagram showing the configuration of a context-awareness-based interactive guidance system according to an embodiment of the present invention.
FIG. 2 is a block diagram showing the configuration of a guide apparatus according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a control method of the context-awareness-based interactive guidance system according to an embodiment of the present invention.
FIG. 4 is a view showing a screen of a guide apparatus according to an embodiment of the present invention.

It is noted that the technical terms used herein are intended only to describe specific embodiments and are not intended to limit the present invention. Unless otherwise defined herein, technical terms used herein should be construed in the sense generally understood by a person of ordinary skill in the art to which the present invention belongs, and should not be construed in an overly broad or overly narrow sense. Where a technical term used herein fails to express the concept of the present invention accurately, it should be replaced by a technical term that a person skilled in the art can correctly understand. General terms used herein should be interpreted according to their dictionary definitions or in context, and should not be construed in an overly narrow sense.

Furthermore, the singular forms used herein include plural forms unless the context clearly dictates otherwise. The terms "comprises" or "comprising" should not be construed as necessarily including all of the elements or steps described herein; some elements or steps may be absent, or additional elements or steps may be included.

In addition, terms including ordinals such as "first" and "second" may be used herein to describe various elements, but the elements should not be limited by those terms. The terms are used only to distinguish one element from another. For example, without departing from the scope of the present invention, a first element may be referred to as a second element, and similarly, a second element may be referred to as a first element.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like or similar elements throughout the several views, and redundant description thereof will be omitted.

In the following description, well-known functions or constructions are not described in detail where they would obscure the invention in unnecessary detail. The accompanying drawings are intended only to facilitate understanding of the present invention, and the technical idea of the present invention should not be construed as limited by them.

FIG. 1 is a block diagram showing the configuration of a context-awareness-based bidirectional guidance system 10 according to an embodiment of the present invention.

As shown in FIG. 1, the context-aware interactive guidance system 10 comprises a guide device 100 and a server 200. Not all of the components of the system 10 shown in FIG. 1 are essential; the system 10 may be implemented with more components than those shown in FIG. 1, or with fewer.

The guide device 100 is installed (or disposed) at a preset location in a multi-use facility such as a shopping mall, a department store, or a subway.

In addition, each guide device 100 may be assigned unique location information (or location information corresponding to the area where the guide device 100 is installed) according to its specific installation location within the facility, and the server 200 can store and manage this information.

As shown in FIG. 2, the guide device 100 includes a camera unit 110, a motion recognition sensor 120, a directional speaker 130, a directional microphone 140, a communication unit 150, a storage unit 160, and a control unit 170. Not all of the components of the guide device 100 shown in FIG. 2 are essential; the guide device 100 may be implemented with more components than those shown in FIG. 2, or with fewer.

The camera unit (or the photographing unit) 110 is installed (or arranged) at a predetermined position adjacent to the corresponding area so that an object (or a customer) existing in a predetermined area can be photographed.

In addition, the camera unit 110 processes image frames such as still images or moving images obtained by an image sensor (camera module or camera) in a video communication mode, a shooting mode, a video conference mode, and the like. That is, the camera unit 110 encodes / decodes corresponding image data obtained by the image sensor according to a codec according to each standard.

In addition, the camera unit 110 photographs an object entering (or present) in the predetermined area.

When an object entering the predetermined area is detected through the motion recognition sensor 120 and the object remains in the area for a predetermined time or longer, the camera unit 110 photographs the object in the area under the control of the control unit 170.

In addition, the camera unit 110 provides the processed image frame to the control unit 170.

The image frame processed by the camera unit 110 may be stored in the storage unit 160, or may be transmitted to the server 200 through the communication unit 150, under the control of the control unit 170.

In addition, the direction (or angle/posture) of the camera unit 110 is controlled by the control unit 170.

The motion recognition sensor (or sensor unit) 120 is installed (or arranged) at a predetermined position for sensing an object entering (or present in) the predetermined area. Here, the motion recognition sensor 120 is a sensor for recognizing the movement or position of an object, and may include a geomagnetic sensor, an acceleration sensor, a gyro sensor, an inertial sensor, an altimeter, a vibration sensor, and the like; other sensors related to motion recognition may further be included.

In addition, the motion recognition sensor 120 detects (or recognizes) an object entering (or present) in the predetermined area.

In addition, the motion recognition sensor 120 detects the movement of an object moving after entering the area, under the control of the controller 170. A plurality of motion recognition sensors 120 may be provided in order to detect the movement of the object within the area.

In addition, the information sensed by the motion recognition sensor 120 is digitized through a digital signal processing process, and the digitized information is transmitted to the controller 170.

The motion recognition sensor 120 may include at least one sensor for sensing at least one of information inside the guide device 100, information on the environment surrounding the guide device 100, and user information. For example, the motion recognition sensor 120 may include at least one of a proximity sensor, an illumination sensor, a touch sensor, an acceleration sensor, a magnetic sensor, a gravity sensor (G-sensor), a gyroscope sensor, a motion sensor, an inertial sensor, an altimeter, a vibration sensor, an RGB sensor, an infrared sensor, a fingerprint sensor, an ultrasonic sensor, an optical sensor, a camera, a microphone, a battery gauge, an environmental sensor (for example, a barometer, a hygrometer, a thermometer, a radioactivity sensor, a heat sensor, or a gas sensor), and a chemical sensor (for example, an electronic nose, a healthcare sensor, or a biometric sensor). The guide device 100 disclosed herein may combine and utilize information sensed by at least two of these sensors.

The directional speaker (or super-directional speaker) 130 is a speaker system with directivity that radiates sound only in a desired direction; for example, two or more speakers of the same diameter may be arranged on the same plane to form a single array.
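
Such a same-plane array achieves directivity through beam steering. The sketch below shows classic delay-and-sum steering for a uniform linear array; the patent only states that same-diameter speakers share a plane, so the uniform spacing and geometry are assumptions for illustration.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def steering_delays(n_speakers, spacing_m, angle_deg):
    """Per-element delays (seconds) that steer a linear array toward angle_deg.

    angle_deg is measured from the array's broadside direction; delaying each
    element progressively tilts the combined wavefront toward that angle.
    """
    theta = math.radians(angle_deg)
    return [i * spacing_m * math.sin(theta) / SPEED_OF_SOUND_M_S
            for i in range(n_speakers)]
```

At broadside (0°) all delays are zero; steering off-axis introduces a per-element delay proportional to the element spacing.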

In addition, the directional speaker 130 is installed (or arranged) at a predetermined position in a multi-use facility such as a shopping mall, department store, or subway where the guide device 100 is located, so that voice information is transmitted only to objects existing (or positioned) in the predetermined area.

In addition, the directional speaker 130 outputs guidance information (or audio information) and / or advertisement information under the control of the controller 170.

The directional microphone 140 is a long, narrow (shotgun-type) microphone that selectively picks up only sound arriving within a narrow angle from a specific direction; directional microphones include unidirectional and bidirectional types.

In addition, the directional microphone 140 is installed (or arranged) at a predetermined position in a multi-use facility such as a shopping mall, department store, or subway, so as to receive the voice information (or sound) output from an object existing (or located) in the predetermined area.

In addition, the directional microphone 140 transmits voice information received from the object in the area to the controller 170.

The communication unit 150 communicates with at least one external terminal, such as the server 200, or any internal component via a wired/wireless communication network. Wireless Internet technologies include WLAN (Wireless LAN), DLNA (Digital Living Network Alliance), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), IEEE 802.16, LTE (Long Term Evolution), LTE-A (Long Term Evolution-Advanced), WMBS (Wireless Mobile Broadband Service), and the like, and the communication unit 150 transmits and receives data according to at least one wireless Internet technology, including Internet technologies not listed above. Short-range communication technologies include Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, NFC (Near Field Communication), USC (Ultra Sound Communication), VLC (Visible Light Communication), Wi-Fi, and Wi-Fi Direct. Wired communication technologies may include PLC (Power Line Communication), USB communication, Ethernet, serial communication, and optical/coaxial cable.

In addition, the communication unit 150 can transmit information to and from an arbitrary terminal through a universal serial bus (USB).

In addition, the communication unit 150 may communicate with the server 200 over a mobile communication network constructed according to technical standards or communication schemes for mobile communication, such as CDMA (Code Division Multiple Access), EV-DO (Enhanced Voice-Data Optimized or Enhanced Voice-Data Only), WCDMA (Wideband CDMA), HSDPA (High Speed Downlink Packet Access), HSUPA (High Speed Uplink Packet Access), LTE (Long Term Evolution), LTE-A (Long Term Evolution-Advanced), and the like.

Also, the communication unit 150 receives the guidance information (or voice information), advertisement information, and the like transmitted from the server 200.

The storage unit 160 stores various user interfaces (UI), a graphical user interface (GUI), and the like.

Also, the storage unit 160 stores data and programs necessary for the guide apparatus 100 to operate.

That is, the storage unit 160 may store a plurality of application programs (or applications) driven on the guide device 100, and data and commands for the operation of the guide device 100. At least some of these application programs may be downloaded from an external server via wireless communication. At least some of these application programs may also be present on the guide device 100 from the time of shipment for the basic functions of the guide device 100 (for example, receiving and placing calls, receiving and sending messages). The application programs are stored in the storage unit 160, installed on the guide device 100, and driven by the control unit 170 to perform the operation (or function) of the guide device 100.

The storage unit 160 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (for example, SD or XD memory), a RAM (Random Access Memory), an SRAM (Static RAM), a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable ROM), a magnetic RAM (MRAM), and a PROM (Programmable Read-Only Memory). The guide device 100 may also operate web storage that performs the storage function of the storage unit 160 on the Internet, or operate in association with such web storage.

The storage unit 160 stores guidance information (or voice information), advertisement information, and the like transmitted from the server 200 via the communication unit 150 under the control of the control unit 170. At this time, the guidance information, advertisement information, and the like may be produced in a specific language.

The control unit 170 executes the overall control function of the guide device 100.

Also, the control unit 170 executes the overall control function of the guide device 100 using the programs and data stored in the storage unit 160. The control unit 170 may include a RAM, a ROM, a CPU, a GPU, and a bus, and the RAM, the ROM, the CPU, and the GPU may be connected to one another via the bus. The CPU accesses the storage unit 160, performs booting using the O/S stored in the storage unit 160, and can perform various operations using the programs, contents, and data stored in the storage unit 160.

Also, when an object (or a customer) entering the preset area (for example, a service area) is detected (or recognized) through the camera unit 110 and/or the motion recognition sensor 120, the control unit 170 checks (or determines) whether the object stays (or is located/exists) in the preset area for more than a predetermined time.

That is, when an object entering the area is sensed through the motion recognition sensor 120, the control unit 170 checks (or determines) whether the object entering (or existing/positioned in) the area stays there for the predetermined time or longer.

In addition, the control unit 170 analyzes the image information obtained in real time from the camera unit 110 and determines, based on the analysis result, whether an object exists in the area. If it is confirmed that an object is entering the area, the control unit 170 checks whether the object entering (or existing/positioned in) the area stays within the preset area for the predetermined time or longer.

At this time, if the object is located in a predetermined display area (for example, a footprint pattern) within the preset area, the control unit 170 may determine whether the object stays in the predetermined area for the predetermined time or longer.

If the object does not stay in the predetermined area for more than the preset time, the control unit 170 maintains the standby state without performing the bidirectional guidance function of providing guidance information or receiving inquiry information.

If the object stays in the predetermined area for the predetermined time or longer, the control unit 170 photographs the corresponding object through the camera unit 110 included in the guide device 100.
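The dwell-time check described above can be sketched as follows. This is a minimal illustration only: the 1-meter radius, the 3-second threshold, and the function and parameter names are assumptions for the sketch, not part of the disclosed implementation.

```python
DWELL_THRESHOLD_S = 3.0   # assumed predetermined time (3 seconds in FIG. 4 example)
AREA_RADIUS_M = 1.0       # assumed radius of the preset area

def object_dwells(samples, threshold=DWELL_THRESHOLD_S, radius=AREA_RADIUS_M):
    """Return True if the object stayed inside the area for the threshold.

    samples: list of (timestamp, x, y) positions relative to the area
    center, e.g. produced by the motion recognition sensor or by
    analyzing image information from the camera unit.
    """
    entered_at = None
    for t, x, y in samples:
        inside = (x * x + y * y) ** 0.5 <= radius
        if inside:
            if entered_at is None:
                entered_at = t            # object just entered the area
            if t - entered_at >= threshold:
                return True               # stayed long enough -> photograph
        else:
            entered_at = None             # left the area -> remain in standby
    return False
```

In this sketch, a return value of True corresponds to triggering the camera unit, while False corresponds to remaining in the standby state.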

That is, when the object located (or existing) in the area for more than the predetermined time is detected (or recognized) through the camera unit 110 and/or the motion recognition sensor 120, the control unit 170 may control the camera unit 110 to photograph the corresponding object entering (or present/located) within the area.

When the object located in (or existing in) a predetermined display area within the area is detected (or recognized) through the camera unit 110 and/or the motion recognition sensor 120, the control unit 170 may control the camera unit 110 to photograph the corresponding object. Here, the display area may be a footprint shape, a circle shape, a polygon, or the like, displayed at a preset location in the area.

In addition, the controller 170 confirms (or recognizes) the foot direction or the body direction of the object from the object photographed through the camera unit 110 (or image information including the object).

In addition, the control unit 170 determines which language among the preset direction-specific languages (or direction-specific guidance languages) corresponds to the foot direction or the body direction of the identified (or recognized) object.

That is, the control unit 170 identifies a language (or a guidance language) corresponding to the foot direction or the body direction of the confirmed object from among a plurality of preset guidance languages (or direction-specific guidance languages). At this time, the control unit 170 confirms (or recognizes) a part of the body of the object located in any one of a plurality of areas for language selection within the preset area, and confirms (or recognizes) the language corresponding to that area.

In this way, the control unit 170 can confirm the language (or guidance language/language type) preferred by the object (or in which the object desires to be guided) according to the foot direction, body direction, etc. of the object.

In this way, the controller 170 recognizes the situation and can perform a control function according to the recognized situation.

In addition, the controller 170 recognizes the face of the object from the object photographed through the camera 110 (or image information including the object). At this time, the controller 170 can recognize the face of the object based on known face recognition techniques, and the description of the face recognition techniques will be omitted in the present invention.

In addition, the control unit 170 controls the communication unit 150 to transmit, to the server 200, information on the recognized face (for example, including feature information (or feature points)), single code information generated corresponding to the recognized face (or to the information on the recognized face), information on the identified language (or guidance language/language type), unique identification information of the guide device 100 (or position information corresponding to the area where the guide device 100 is installed), and the like.

In addition, the control unit 170 controls the communication unit 150 to receive guidance information (or voice information), advertisement information, and the like corresponding to the personal information related to the object, transmitted from the server 200 in response to the transmitted information on the recognized face. At this time, the guidance information, advertisement information, and the like corresponding to the personal information may be produced in the identified language.
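The round trip to the server can be sketched as a message exchange. The field names, the JSON encoding, and the helper names below are illustrative assumptions; the patent only specifies that face information, single code information, the confirmed language, and the device's unique identification information are transmitted, and that guidance/advertisement information is returned.

```python
import json

def build_recognition_request(face_features, single_code, language, device_id):
    """Assemble the message the guide device sends to the server."""
    return json.dumps({
        "face_features": face_features,  # e.g. feature-point vector of the face
        "single_code": single_code,      # e.g. "01010001"
        "language": language,            # identified language, e.g. "zh"
        "device_id": device_id,          # unique ID of guide device 100
    })

def parse_server_response(raw):
    """Decode the guidance/advertisement information returned by the server."""
    msg = json.loads(raw)
    return msg.get("guide_info"), msg.get("advert_info")
```

The response contents would then be routed to the directional speaker for output toward the object's location.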

Also, the controller 170 outputs the received guide information (or voice information), advertisement information, and the like only to the corresponding object (or customer) through the directional speaker 130.

In addition, the control unit 170 performs a preset noise reduction function on the voice information of the object received (or input) through the directional microphone 140, performs preset signal processing on the noise-removed voice information, and transmits the signal-processed voice information to the server 200 through the communication unit 150.
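The patent does not specify which noise reduction or signal processing is used, so the following is only a toy sketch of the pipeline shape: a crude noise gate followed by an assumed fixed-gain stage. The threshold and gain values are arbitrary assumptions.

```python
def remove_noise(samples, noise_floor=0.02):
    """Crude noise gate: zero out samples below an assumed noise floor."""
    return [s if abs(s) >= noise_floor else 0.0 for s in samples]

def signal_process(samples, gain=2.0):
    """Assumed signal-processing step: apply a fixed gain after gating.

    The result corresponds to the 'signal-processed voice information'
    that the control unit forwards to the server.
    """
    return [s * gain for s in remove_noise(samples)]
```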

The control unit 170 performs bidirectional voice information transmission/reception with the server 200 through the directional speaker 130 and/or the directional microphone 140.

In addition, when movement of the corresponding object located in the predetermined area is detected through the camera unit 110 and/or the motion recognition sensor 120 (or when the object moves within the corresponding area), the control unit 170 follows the movement of the object based on the information sensed through the camera unit 110 and/or the motion recognition sensor 120.

That is, the control unit 170 analyzes the movement of the object in the image information photographed through the camera unit 110. When movement of the object is detected in the captured image information, the control unit 170 further analyzes (or analyzes in real time) the image information captured through the camera unit 110 to confirm the movement of the object.

In addition, the control unit 170 analyzes the movement of the object in the area sensed through the motion recognition sensor 120. When movement of the object within the sensed area is detected, the control unit 170 further analyzes (or analyzes in real time) the information sensed by the motion recognition sensor 120 to confirm the movement of the object.

The control unit 170 controls the direction (or angle/posture) of the directional speaker 130 and the directional microphone 140 based on the confirmed movement of the object.

That is, the controller 170 follows the movement of the object and controls the direction (or angle / posture) of the directional speaker 130 and the directional microphone 140 to correspond to the movement of the object.

At this time, when the object moves within the predetermined area, or within an effective area including the predetermined area (for example, an area of 1.5 meters in radius, obtained by adding 0.5 meters to the predetermined area of 1 meter in radius), the control unit 170 follows the movement of the object and controls the direction (or angle) of the directional speaker 130 and the directional microphone 140 to correspond to the movement of the followed object.

When the object moves out of the effective area including the preset area, the control unit 170 stops following the movement of the object and returns the direction (or angle) of the directional speaker 130 and the directional microphone 140 to a preset initial position (or direction/angle). Here, the preset initial position of the directional speaker 130 and the directional microphone 140 may be a position directed toward the predetermined area.
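The follow/reset behavior above can be sketched as a single steering function. The radii come from the example in the description (1 meter plus a 0.5-meter margin); the angle convention, the initial angle of 0 degrees, and the function name are assumptions for illustration.

```python
import math

AREA_RADIUS_M = 1.0       # preset area radius (example from the description)
EFFECTIVE_RADIUS_M = 1.5  # preset area plus assumed 0.5 m margin
INITIAL_ANGLE_DEG = 0.0   # assumed preset initial direction (toward the area)

def steer(position):
    """Return the angle the directional speaker/microphone should face.

    position: (x, y) of the object relative to the guide device. While
    the object is inside the effective area, the hardware follows it;
    once it leaves, following stops and the preset initial position is
    restored.
    """
    x, y = position
    if math.hypot(x, y) > EFFECTIVE_RADIUS_M:
        return INITIAL_ANGLE_DEG           # stop following, reset direction
    return math.degrees(math.atan2(y, x))  # follow the object's movement
```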

When no object entering the predetermined area is detected through the camera unit 110 and/or the motion recognition sensor 120, the control unit 170 outputs, through the directional speaker 130, other announcement broadcasts transmitted from the server 200.

In this way, the control unit 170 can recognize the presence or absence of an object (or a customer) in the inbound area, which is the preset area, and output different announcement broadcasts according to whether a customer is present in the area.

In addition, the control unit 170 may output other predetermined guidance information through the directional speaker 130 during the waiting time from when the customer enters the corresponding area until communication with the actual server 200 is performed.

The guide device 100 according to the present invention may display various contents, such as various menu screens, on a display unit (not shown) using the user interface and/or graphical user interface stored in the storage unit 160 under the control of the control unit 170. Here, the content displayed on the display unit includes various text or image data (including various information data) and a menu screen including data such as icons, list menus, and combo boxes. Also, the display unit may be a touch screen, in which case it may include a touch sensor for sensing the user's touch gesture. The touch sensor may be of various types such as an electrostatic type, a pressure-sensitive type, and a piezoelectric type. In the case of the electrostatic type, the touch coordinates are calculated by sensing, using a dielectric coated on the surface of the touch screen, the minute electricity excited by the user's body when a part of the user's body touches the touch screen surface. In the case of the pressure-sensitive type, two electrode plates are built into the touch screen; when the user touches the screen, the upper and lower electrode plates at the touched position contact each other and current flows. In addition, the guide device may support a pen input function, in which case a gesture of the user using an input means such as a pen, which is not part of the user's body, may be sensed. For example, if the input means is a stylus pen containing a coil, the guide device may include a magnetic field sensor for sensing the magnetic field varied by the coil inside the stylus pen. In this case, not only the user's touch gesture but also a proximity gesture of the user, such as hovering, can be detected.

The display unit may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional display (3D display), an electronic ink display (e-ink display), and an LED (Light Emitting Diode) display, and may include a driving circuit and a backlight unit.

The display unit displays the operation state of the guide device 100, the image information photographed through the camera unit 110 (or image information including the object), the guidance information and advertisement information transmitted from the server 200, and the like.

The server 200 communicates with the guide device 100 and the like.

The server 200 may include components such as a camera unit, a motion recognition sensor, a directional speaker, a directional microphone, a communication unit, a storage unit, and a control unit, which are applied to the guide device 100.

The server 200 may also store the information on the recognized face (for example, including feature information (or feature points)) transmitted from the guide device 100, the single code information generated corresponding to the recognized face (or to the information on the recognized face), the information on the confirmed language (or guidance language/language type), the unique identification information of the guide device 100 (or position information corresponding to the area where the guide device 100 is installed), and the like.

Also, the server 200 confirms the personal information corresponding to the received information on the recognized face (or the received single code information) from among a plurality of pieces of personal information according to previously stored face information (or single code information). Here, the personal information includes a name, an address, a sex, an age, previous consultation contents, and the like. At this time, the server 200 converts the personal information into the language indicated by the received information on the language.

Also, the server 200 confirms the personal information corresponding to the received single code information from a plurality of pieces of personal information corresponding to the single code information stored in advance.
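The server-side lookup can be sketched as a simple keyed store. The record fields and sample values below are hypothetical placeholders; the patent only lists name, address, sex, age, and previous consultation contents as examples of personal information.

```python
# Hypothetical pre-stored records keyed by single code information.
PERSONAL_INFO_BY_CODE = {
    "01010001": {
        "name": "customer-A",          # placeholder value
        "age": 35,                     # placeholder value
        "prior_consultation": "none",  # placeholder value
    },
}

def lookup_personal_info(single_code, store=PERSONAL_INFO_BY_CODE):
    """Confirm the personal information matching the received single code.

    Returns None when no matching record exists, in which case the
    server could fall back to generic guidance information.
    """
    return store.get(single_code)
```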

In addition, the server 200 performs a bidirectional communication function with the sensed object based on the confirmed personal information by the relay function of the guide device 100.

That is, the server 200 transmits guidance information (or voice information), advertisement information, and the like corresponding to the personal information to the guide device 100 based on the confirmed personal information. At this time, the guide device 100 can output the guidance information (or voice information) and/or advertisement information transmitted from the server 200 only to the area where the object is located, through the directional speaker 130 included in the guide device 100. At this time, the server 200 may provide the guidance information and/or advertisement information to the object through the guide device 100 in the language desired by the object, based on the received information on the language.

In addition, the server 200 receives voice information of a corresponding object transmitted through the directional microphone 140 included in the guide device 100.

Also, the server 200 transmits guidance information (or voice information), advertisement information, and the like corresponding to the received voice information to the guide device 100. At this time, the guide device 100 can output the guidance information (or voice information) and/or advertisement information transmitted from the server 200 only to the area where the object is located, through the directional speaker 130 included in the guide device 100.

In the embodiment of the present invention, the functions of the guide device 100 and the server 200 are described separately. However, the present invention is not limited thereto, and all the functions of the server 200 may be performed by the guide device 100.

That is, the guide device 100 and the server 200 may be implemented as a single device.

In this way, when an object enters a preset area, information (or voice information, sound, etc.) can be transmitted to and received from the object through the directional microphone and the directional speaker provided corresponding to the area.

In this manner, it is possible to recognize an object entering a preset area, recognize/confirm the guidance request of the object, recognize/confirm the language preferred by the object, confirm the stored personal information related to the object, and perform a bidirectional communication function with the object based on the confirmed personal information.

Hereinafter, a method of controlling the context-aware, bidirectional guidance system according to the present invention will be described in detail with reference to FIG. 1 to FIG.

FIG. 3 is a flowchart illustrating a control method of the context-aware, bidirectional guidance system according to an embodiment of the present invention.

First, when an object (or a customer) entering a preset area (for example, a service area) is detected (or recognized) by the guide device 100, the guide device 100 checks (or judges) whether the object stays (or exists/remains) in the preset area for a predetermined time or longer.

At this time, when the object is located in a predetermined display area (for example, a footprint pattern) within the preset area, the guide device 100 may check whether the object stays in the predetermined area for more than the predetermined time.

For example, as shown in FIG. 4, when a customer 430 standing on a footprint pattern 420 displayed at the center of a predetermined one-meter-radius area 410 in a shopping mall is detected through the guide device 100, the guide device 100 confirms whether the customer 430 is located within the corresponding one-meter area 410 (or on the footprint pattern 420) for more than 3 seconds (S310).

If the object does not stay in the predetermined area for the predetermined time or longer, the guide device 100 maintains the standby state without performing the bidirectional guidance function of providing guidance information or receiving inquiry information.

For example, when the corresponding customer 430 shown in FIG. 4 is not located within the corresponding one-meter area 410 (or on the footprint pattern 420) for more than three seconds, which is the preset time, the guide device 100 maintains the standby state (S320).

When the object stays in the predetermined area for the predetermined time or longer, the guide device 100 photographs the corresponding object through the camera unit 110 included in the guide device 100.

For example, when the corresponding customer 430 shown in FIG. 4 is located within the corresponding one-meter area 410 (or on the footprint pattern 420) for more than 3 seconds, which is the predetermined time, the guide device 100 photographs the customer and obtains image information including the customer (S330).

Then, the guiding device 100 confirms (or recognizes) the foot direction or the body direction of the object in the image information including the photographed object.

In addition, the guide device 100 determines which language among the preset direction-specific languages (or direction-specific guidance languages) corresponds to the foot direction or the body direction of the identified (or recognized) object.

That is, the guide device 100 confirms a language (or a guidance language) corresponding to the foot direction or the body direction of the identified object from among a plurality of preset guidance languages. At this time, the guide device 100 confirms (or recognizes) a part of the body of the object located in any one of a plurality of areas for language selection within the preset area, and confirms (or recognizes) the language corresponding to that area.

For example, as shown in FIG. 4, the guide device 100 confirms Chinese, corresponding to the 12 o'clock direction 440 that is the body direction of the confirmed object 430, from among the preset direction-specific languages (for example, English in the 10 o'clock (or 11 o'clock) direction, Chinese in the 12 o'clock (or front) direction, and Japanese in the 2 o'clock (or 1 o'clock) direction).

In this way, the guide device 100 can confirm the language (or guidance language/language type) preferred by the object (or in which the object desires to be guided) from the foot direction, body direction, etc. of the object (S340).
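The direction-to-language mapping of the FIG. 4 example can be sketched as a nearest-direction lookup. The clock-position representation and the function name are assumptions; only the three example pairings (English at 10 o'clock, Chinese at 12 o'clock, Japanese at 2 o'clock) come from the description.

```python
# Direction-specific guidance languages from the FIG. 4 example.
DIRECTION_LANGUAGES = {10: "English", 12: "Chinese", 2: "Japanese"}

def language_for_direction(clock_hour):
    """Pick the guidance language nearest the object's body direction.

    clock_hour: the body (or foot) direction expressed as a clock
    position from 1 to 12, as in the patent's example.
    """
    def clock_distance(h):
        d = abs(clock_hour - h) % 12
        return min(d, 12 - d)   # wrap-around distance on the clock face
    return DIRECTION_LANGUAGES[min(DIRECTION_LANGUAGES, key=clock_distance)]
```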

Then, the guiding device 100 recognizes the face of the object in the photographed object.

In addition, the guide device 100 transmits to the server 200 information on the recognized face (for example, including feature information (or minutiae)), single code information generated corresponding to the recognized face (or to the information on the recognized face), information on the identified language (or guidance language/language type), unique identification information of the guide device 100 (or location information corresponding to the area where the guide device 100 is installed), and the like.

For example, the guide device 100 recognizes the face of the customer in the photographed image information through a previously installed face recognition program. Also, the guide device 100 generates single code information such as '01010001' corresponding to the recognized face. The guide device 100 then transmits the information on the recognized face, the generated single code information, the information on the identified language (for example, Chinese), the unique identification information of the guide device 100, and the like to the server 200 (S350).

Thereafter, the server 200 stores the information on the recognized face (for example, including feature information (or feature points)) transmitted from the guide device 100, the single code information generated corresponding to the recognized face (or to the information on the recognized face), the information on the confirmed language (or guidance language/language type), the unique identification information of the guide device 100 (or position information corresponding to the area where the guide device 100 is installed), and the like.

Also, the server 200 confirms the personal information corresponding to the received information on the recognized face (or the received single code information) from among a plurality of pieces of personal information according to previously stored face information (or single code information). Here, the personal information includes a name, an address, a sex, an age, previous consultation contents, and the like. At this time, the server 200 converts the personal information into the language indicated by the received information on the language.

For example, the server 200 confirms first personal information corresponding to the recognized face information among a plurality of previously stored personal information (S360).

Thereafter, the server 200 performs a bidirectional communication function with the sensed object based on the identified personal information by the relay function of the guide device 100.

That is, the server 200 transmits the guidance information (or voice information) and/or the advertisement information corresponding to the personal information to the guide device 100 based on the confirmed personal information. The guide device 100 may output the guidance information (or voice information) and/or the advertisement information transmitted from the server 200 only to the area where the customer is located, through the directional speaker 130 included in the guide device 100. At this time, the server 200 may provide the guidance information and/or advertisement information to the object through the guide device 100 in the language desired by the object, based on the received information on the language.

In addition, the guide device 100 receives voice information output from the corresponding object through the directional microphone 140 included in the guide device 100, performs a predetermined signal processing function on the received voice information, and transmits the signal-processed voice information to the server 200. In addition, the server 200 transmits the guidance information and/or the advertisement information corresponding to the signal-processed voice information to the guide device 100. The guide device 100 may output the guidance information (or voice information) and/or the advertisement information transmitted from the server 200 only to the area where the customer is located, through the directional speaker 130 included in the guide device 100.

As described above, bidirectional voice information transmitting / receiving function between the server 200 and the object can be performed through the directional speaker 130 and / or the directional microphone 140 included in the guide device 100.

For example, the server 200 transmits to the guide device 100 first advertisement information produced in Chinese (for example, information about an advertisement shop customized for the customer of the recognized face based on the first personal information). Also, the guide device 100 outputs the first advertisement information transmitted from the server 200 to the area where the corresponding customer is located through the directional speaker 130.

In another example, the guide device 100 receives Chinese voice information input from the customer through the directional microphone 140, removes the noise included in the received voice information, performs a predetermined signal processing process, and transmits the signal-processed voice information to the server 200. The server 200 then transmits to the guide device 100 first guidance information in Chinese corresponding to the voice information of the customer transmitted from the guide device 100. In addition, the guide device 100 outputs the first guidance information corresponding to the voice information transmitted from the server 200 to the area where the corresponding customer is located through the directional speaker 130 (S370).

Thereafter, when the corresponding object being sensed moves (or moves), the guiding device 100 follows the movement (or movement) of the corresponding object.

In addition, the guide device 100 may control the direction (or angle/posture) of the directional speaker 130 and the directional microphone 140 to correspond to the movement of the object.

For example, when the customer is moving, the camera unit 110 and / or the motion recognition sensor 120 included in the guide device 100 follow the movement of the customer. Also, the guide device 100 controls the direction (or angle / posture) of the directional speaker 130 and the directional microphone 140 to correspond to the movement of the customer based on the movement of the following object.

At this time, when the object moves within the predetermined area, or within an effective area including the predetermined area (for example, an area of 1.5 meters in radius, obtained by adding 0.5 meters to the predetermined area of 1 meter in radius), the guide device 100 follows the movement of the object and controls the direction (or angle) of the directional speaker 130 and the directional microphone 140 to correspond to the movement of the followed object.

When the object moves out of the effective area including the predetermined area, the guide device 100 stops following the movement of the object and returns the direction (or angle) of the directional speaker 130 and the directional microphone 140 to a preset initial position (or direction/angle) (S380).

The two-way guidance system and the control method thereof according to the embodiment of the present invention can be written as a computer program, and the codes and code segments constituting the computer program can be easily deduced by a computer programmer in the field. In addition, the computer program is stored in a computer-readable medium and is read and executed by a computer, a guide device, a server, or the like according to an embodiment of the present invention, thereby implementing the two-way guidance system and the control method thereof.

The information storage medium includes a magnetic recording medium, an optical recording medium, and a carrier wave medium. The computer program implementing the two-way guidance system and the control method thereof according to the embodiment of the present invention can be stored and installed in a built-in memory of a guide device, a server, and the like. Alternatively, an external memory such as a smart card storing the computer program may be mounted on a guide device, a server, or the like through an interface.

As described above, in the embodiment of the present invention, when an object enters a preset area, information (or voice information, sound, etc.) is transmitted to and received from the object through a directional microphone and a directional speaker installed corresponding to the area, so that user satisfaction is increased by providing the information only to the object requiring the specific information, and the object can receive the information without an additional apparatus.

In addition, as described above, the embodiment of the present invention recognizes an object entering a preset area, recognizes/confirms the guidance request of the object, recognizes/confirms the language preferred by the object, confirms the personal information related to the object, and performs a bidirectional communication function with the object based on the confirmed personal information, thereby providing information or advertisements specific to the particular object.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or essential characteristics thereof. Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.

In the present invention, when an object enters a preset area, information (or audio information, sound, etc.) is transmitted to and received from the object through the directional microphone and directional speaker installed corresponding to the area, and the corresponding information is provided only to the corresponding object to increase user satisfaction. The present invention can therefore be widely used in the object recognition field, the guidance information providing field, the terminal field, and the server field, for example.

10: context awareness based interactive guidance system
100: guide device 200: server
300: server 110: camera unit
120: motion recognition sensor 130: directional speaker
140: directional microphone 150: communication unit
160: storage unit 170: control unit

Claims (10)

A context-aware bidirectional guidance system for delivering guidance information or advertisement information to an object existing within a predetermined area through a directional speaker and a directional microphone,
An operation recognition sensor for sensing an object entering the predetermined area;
A camera unit for photographing the object when the object entering the predetermined area is located within the area for a predetermined time or more;
A control unit for recognizing the foot direction or the body direction of the object from the photographed object, confirming a language corresponding to the foot direction or the body direction of the recognized object among preset direction-specific languages in order to easily confirm the preferred language of the object, and recognizing a face of the object;
Information on the recognized face, single code information generated corresponding to the recognized face, information on the confirmed language, and unique identification information of the guide device to the server, and transmits information on the transmitted recognized face A communication unit for receiving at least one of guide information and advertisement information provided in the form of the confirmed language transmitted from the server in response to the request;
Wherein at least one of the guide information and the advertisement information is directed and output to an object existing in the area in order to provide guide information and advertisement information specific to the object corresponding to the information on the face under the control of the controller Directional speakers; And
And a directional microphone for receiving voice information output from the object,
The guide information and the advertisement information may be,
Wherein the face recognition unit confirms the personal information corresponding to the recognized face or the single code information generated corresponding to the recognized face from a plurality of pieces of personal information corresponding to the face information stored in advance in the server, And the guidance information and advertisement information specific to the verified personal information produced.
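The dwell-time trigger recited in claim 1 — photograph the object only once it has remained in the area for a predetermined time — can be sketched as a small state machine fed by motion-sensor readings. The class name and the two-second threshold below are illustrative assumptions, not part of the claim.

```python
import time

DWELL_SECONDS = 2.0  # assumed value for the claim's "predetermined time"

class DwellGate:
    """Tracks when an object entered the monitored area and decides
    whether it has stayed long enough to trigger the camera capture."""

    def __init__(self, dwell_seconds=DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self.entered_at = None  # monotonic timestamp of first detection

    def on_motion(self, present, now=None):
        """Feed one motion-sensor reading; returns True when the camera
        unit should photograph the object."""
        now = time.monotonic() if now is None else now
        if not present:
            self.entered_at = None      # object left the area: reset timer
            return False
        if self.entered_at is None:
            self.entered_at = now       # first detection: start dwell timer
            return False
        return (now - self.entered_at) >= self.dwell_seconds
```

Passing an explicit `now` makes the gate testable without real delays; in a deployment the sensor loop would call `on_motion(present)` and let it read the monotonic clock.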
The system according to claim 1,
wherein the control unit
confirms a part of the body of the object located in any one of a plurality of areas set within the predetermined area for selecting a language, and confirms a language corresponding to the confirmed one of the plurality of areas.
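The zone-based language selection of claim 2 amounts to mapping the position of a body part to one of several predefined areas, each bound to a preset language. The zone layout, widths, and language codes below are hypothetical stand-ins for the claim's "plurality of areas for selecting a language".

```python
# Hypothetical floor-zone layout: each zone in front of the guide device
# is bound to one preset language; standing in a zone selects it.
ZONE_LANGUAGES = {
    "zone_left": "ko",
    "zone_center": "en",
    "zone_right": "ja",
}

def language_for_position(x):
    """Map a horizontal position (0.0 to 1.0 across the area) to a zone,
    then to the language bound to that zone. Equal zone widths assumed."""
    if x < 1 / 3:
        zone = "zone_left"
    elif x < 2 / 3:
        zone = "zone_center"
    else:
        zone = "zone_right"
    return ZONE_LANGUAGES[zone]
```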
The system according to claim 1,
wherein the control unit
removes noise from the voice information received through the directional microphone, performs signal processing on the noise-removed voice information, transmits the signal-processed voice information to the server through the communication unit, and controls the directional speaker to direct and output at least one of other guide information and other advertisement information transmitted from the server in response to the signal-processed voice information.
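Claim 3 does not specify the noise-removal algorithm; as a minimal stand-in, a moving-average smoother over audio samples shows the shape of such a stage. A real deployment would more likely use spectral subtraction or an adaptive filter.

```python
def denoise(samples, window=3):
    """Very simple moving-average smoothing as a stand-in for the
    claim's noise-removal stage. `window` is an odd sample count."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        # Average the neighborhood, clipped at the signal edges.
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```

An isolated spike (likely noise) is spread and attenuated, while slowly varying speech energy passes through largely unchanged.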
The system according to claim 1,
wherein the control unit,
when movement of the object present in the area is sensed through the motion recognition sensor or the camera unit, controls the directions of the directional speaker and the directional microphone to correspond to the movement of the object.
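Steering the directional speaker and microphone to follow the tracked object, as in claim 4, reduces to computing a pan angle from the object's position relative to the device. The coordinate frame (device at the origin, zero degrees straight ahead along the y axis) is an assumption for illustration.

```python
import math

def steering_angle(obj_x, obj_y, device_x=0.0, device_y=0.0):
    """Pan angle in degrees that points the directional speaker and
    microphone at the tracked object's floor position. Positive angles
    are to the device's right, negative to its left."""
    return math.degrees(math.atan2(obj_x - device_x, obj_y - device_y))
```

As the camera or motion sensor reports updated object positions, re-evaluating `steering_angle` gives the new pan command for both transducers.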
A context-awareness-based interactive guidance system for delivering guide information or advertisement information to an object present within a predetermined area through a directional speaker and a directional microphone, the system comprising:
a guide device for sensing the object entering the predetermined area, photographing the object when the sensed object remains within the area for a predetermined time or longer, recognizing a foot direction or a body direction of the photographed object, confirming a language corresponding to the recognized foot direction or body direction from among languages preset for respective directions so as to readily confirm the language preferred by the object, recognizing a face of the photographed object, transmitting to a server information on the recognized face, single-code information generated corresponding to the recognized face, information on the confirmed language, and unique identification information of the guide device, receiving at least one of first guide information and first advertisement information provided in the confirmed language and transmitted from the server in response to the transmitted information on the recognized face, and directing and outputting, through the directional speaker, at least one of the received first guide information and first advertisement information toward the object present in the area so as to provide guide information and advertisement information customized to the object corresponding to the information on the face; and
a server for confirming, from among a plurality of pieces of stored personal information corresponding to face information, personal information corresponding to the recognized face transmitted from the guide device or to the single-code information generated corresponding to the recognized face, and for transmitting, to the guide device, at least one of first guide information and first advertisement information customized to the confirmed personal information and produced in the confirmed language.
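The server-side lookup of claim 5 — resolve a visitor by the recognized face or by the single code generated for that face, then return content customized to that person in the confirmed language — could look like the following in-memory sketch. The store layout, keys, and content strings are hypothetical.

```python
# Hypothetical in-memory stand-ins for the server's stored personal
# information and its per-person, per-language guide/advertisement content.
PERSONS = {
    "face-001": {"name": "A", "single_code": "SC-9"},
}
CONTENT = {
    ("face-001", "en"): {"guide": "Welcome back", "ad": "Members' offer"},
}

def lookup(face_id=None, single_code=None, language="en"):
    """Resolve a visitor by recognized-face id, or by the single code
    generated for that face, then return content in the confirmed
    language; returns None when no personal information matches."""
    if face_id is None and single_code is not None:
        matches = [k for k, v in PERSONS.items()
                   if v["single_code"] == single_code]
        face_id = matches[0] if matches else None
    if face_id not in PERSONS:
        return None
    return CONTENT.get((face_id, language))
```

In a real server the two dictionaries would be database tables and `face_id` would come from a face-matching model, but the control flow — match person first, then select language-specific content — is the same.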
6. The system of claim 5,
wherein the guide device
confirms a part of the body of the object located in any one of a plurality of areas set within the predetermined area for selecting a language, and confirms a language corresponding to the confirmed one of the plurality of areas.
7. The system of claim 5,
wherein the guide device
controls the directions of the directional speaker and the directional microphone to correspond to the movement of the object when movement of the object present in the area is sensed.
A method of controlling a context-awareness-based interactive guidance system for delivering guide information or advertisement information to an object present within a predetermined area through a directional speaker and a directional microphone, the method comprising:
sensing, through a control unit, an object entering the predetermined area based on image information photographed through a camera unit;
photographing the object through the camera unit when the sensed object stays in the predetermined area for a predetermined time or longer;
recognizing, through the control unit, a foot direction or a body direction of the photographed object;
confirming, through the control unit, a language corresponding to the recognized foot direction or body direction from among languages preset for respective directions so as to readily confirm the language preferred by the object;
recognizing, through the control unit, a face of the photographed object;
transmitting, through a communication unit, information on the recognized face, single-code information generated corresponding to the recognized face, information on the confirmed language, and unique identification information of the guide device to a server;
receiving, through the communication unit, at least one of guide information and advertisement information provided in the confirmed language and transmitted from the server in response to the transmitted information on the recognized face; and
controlling, through the control unit, the directional speaker to direct and output at least one of the received guide information and advertisement information toward the object present in the area so as to provide guide information and advertisement information customized to the object corresponding to the information on the face,
wherein the guide information and the advertisement information are
produced by confirming, from among a plurality of pieces of personal information corresponding to face information stored in advance in the server, personal information corresponding to the recognized face or to the single-code information generated corresponding to the recognized face, and are customized to the confirmed personal information and provided in the confirmed language.
9. The method of claim 8, further comprising:
receiving, through a directional microphone, voice information output from the object present in the area;
removing, through the control unit, noise from the received voice information and performing signal processing on the noise-removed voice information;
transmitting the signal-processed voice information to the server through the communication unit;
controlling, through the control unit, the directional speaker to direct and output, toward the object present in the area, at least one of other guide information and other advertisement information transmitted from the server in response to the transmitted signal-processed voice information; and
controlling, through the control unit, the directions of the directional speaker and the directional microphone to correspond to movement of the object when movement of the object present in the area is sensed.
A method of controlling a context-awareness-based interactive guidance system for delivering guide information or advertisement information to an object present within a predetermined area through a directional speaker and a directional microphone, the method comprising:
sensing, through a guide device, the object entering the predetermined area, and photographing the object when the sensed object remains within the area for a predetermined time or longer;
recognizing, through the guide device, a foot direction or a body direction of the photographed object;
recognizing, through the guide device, a face of the photographed object;
confirming, through the guide device, a language corresponding to the recognized foot direction or body direction from among languages preset for respective directions so as to readily confirm the language preferred by the object;
transmitting, through the guide device, information on the recognized face, single-code information generated corresponding to the recognized face, information on the confirmed language, and unique identification information of the guide device to a server;
confirming, through the server, from among a plurality of pieces of stored personal information corresponding to face information, personal information corresponding to the recognized face transmitted from the guide device or to the single-code information generated corresponding to the recognized face;
transmitting, through the server, at least one of first guide information and first advertisement information customized to the confirmed personal information and produced in the confirmed language to the guide device;
directing and outputting, through the guide device, via the directional speaker, at least one of the first guide information and the first advertisement information provided in the confirmed language and transmitted from the server toward the object present in the area;
receiving, through the guide device, via the directional microphone, voice information output from the object, removing noise from the voice information, performing signal processing on the noise-removed voice information, and transmitting the signal-processed voice information to the server;
transmitting, through the server, at least one of second guide information and second advertisement information to the guide device in response to the signal-processed voice information; and
directing and outputting, through the guide device, via the directional speaker, at least one of the second guide information and the second advertisement information toward the object present in the area.
KR1020150121029A 2015-08-27 2015-08-27 Context awareness based interactive guidance system and control method thereof KR101760898B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150121029A KR101760898B1 (en) 2015-08-27 2015-08-27 Context awareness based interactive guidance system and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150121029A KR101760898B1 (en) 2015-08-27 2015-08-27 Context awareness based interactive guidance system and control method thereof

Publications (2)

Publication Number Publication Date
KR20170025095A KR20170025095A (en) 2017-03-08
KR101760898B1 true KR101760898B1 (en) 2017-07-24

Family

ID=58404707

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150121029A KR101760898B1 (en) 2015-08-27 2015-08-27 Context awareness based interactive guidance system and control method thereof

Country Status (1)

Country Link
KR (1) KR101760898B1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311872B2 (en) * 2017-07-25 2019-06-04 Google Llc Utterance classifier
KR101949258B1 (en) * 2017-11-03 2019-02-18 주식회사 브이알미디어 Payment System based on KIOSK
KR102113857B1 (en) 2020-01-16 2020-05-21 최영철 A method for construction provider including consultin
KR102113861B1 (en) 2020-01-16 2020-06-02 최영철 A method for construction provider including consultin
KR102597043B1 (en) * 2021-07-01 2023-11-01 이황기 Ai-based camera apparatus and camera system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010113691A (en) * 2008-11-10 2010-05-20 Nec Corp Behavioral analysis device and method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101081179B1 (en) 2009-10-15 2011-11-07 세종대학교산학협력단 Gain control method for automatic noise diminution and apparatus thereof


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210099245A (en) * 2020-02-03 2021-08-12 주식회사 지앤 A satisfaction survey system through motion recognition in field space
KR102318661B1 (en) * 2020-02-03 2021-11-03 주식회사 지앤 A satisfaction survey system through motion recognition in field space

Also Published As

Publication number Publication date
KR20170025095A (en) 2017-03-08

Similar Documents

Publication Publication Date Title
KR101760898B1 (en) Context awareness based interactive guidance system and control method thereof
KR101591835B1 (en) Mobile terminal and method for controlling the same
KR102248474B1 (en) Voice command providing method and apparatus
KR101688168B1 (en) Mobile terminal and method for controlling the same
US20170243578A1 (en) Voice processing method and device
US10402625B2 (en) Intelligent electronic device and method of operating the same
EP2701277A2 (en) Method and apparatus for wireless charging an electronic device
KR20180068127A (en) Mobile terminal and method for controlling the same
US20160133257A1 (en) Method for displaying text and electronic device thereof
CN105323372A (en) Mobile terminal and method for controlling the same
KR102055677B1 (en) Mobile robot and method for controlling the same
KR20170082383A (en) Mobile terminal and method for controlling the same
KR101764267B1 (en) System of displaying phone number for vehicle and control method thereof
KR101641424B1 (en) Terminal and operating method thereof
US20170251138A1 (en) Video call method and device
CN105468261A (en) Mobile terminal and controlling method thereof
CN108460599B (en) Mobile payment method and mobile terminal
KR101657377B1 (en) Portable terminal, case for portable terminal and iris recognition method thereof
US20210200189A1 (en) Method for determining movement of electronic device and electronic device using same
KR20180106731A (en) Mobile terminal having artificial intelligence agent
CN108596600B (en) Information processing method and terminal
KR101516998B1 (en) Method and apparatus for sharing location information among mobile devices
CN107609446B (en) Code pattern recognition method, terminal and computer readable storage medium
KR101843660B1 (en) Payment method for transportation fee by hce type using mobile terminal
KR20170039464A (en) User equipment, service providing device, lighting apparatus, payment system comprising the same, control method thereof and computer readable medium having computer program recorded thereon

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant