CN110597077B - Method and system for realizing intelligent scene switching based on indoor positioning - Google Patents


Info

Publication number
CN110597077B
CN110597077B (application number CN201910912256.2A)
Authority
CN
China
Prior art keywords
positioning
module
scene
range
area
Prior art date
Legal status
Active
Application number
CN201910912256.2A
Other languages
Chinese (zh)
Other versions
CN110597077A (en)
Inventor
孟凡靖
Current Assignee
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Dilu Technology Co Ltd
Priority to CN201910912256.2A
Publication of CN110597077A
Application granted
Publication of CN110597077B

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00: Systems controlled by a computer
    • G05B15/02: Systems controlled by a computer electric
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00: Program-control systems
    • G05B2219/20: Pc systems
    • G05B2219/26: Pc applications
    • G05B2219/2642: Domotique, domestic, home control, automation, smart house

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a method and a system for intelligent scene switching based on indoor positioning. The method determines the range of an area to be positioned in an indoor space; installs an infrared sensing receiving device and a four-microphone array correspondingly in each sub-area; uses the infrared sensing receiving device to determine the unique small-range positioning area in which a moving object is located; performs accurate positioning with the four-microphone array; has an auxiliary positioning module determine the identity of the moving object; and has a scene switching module complete the scene switching operation. The beneficial effects of the invention are: first, better safety and concealment; second, comprehensive positioning across multiple dimensions, namely the sound of the moving person's movements, body temperature, and images captured within a certain range during movement; and third, accurate positioning of the moving person can be well assisted and detected in an indoor environment, and the confidence with which the person's specific identity is distinguished is markedly improved.

Description

Method and system for realizing intelligent scene switching based on indoor positioning
Technical Field
The invention relates to the technical field of smart homes, and in particular to a method and a system for realizing intelligent scene switching based on indoor positioning.
Background
In recent years, indoor positioning technology has served as an auxiliary to satellite positioning in indoor environments where satellite positioning is unavailable, addressing the problems that satellite signals are weak when they reach the ground and cannot penetrate buildings, so that the current position of an object can still be determined. Indoor positioning means locating a position within an indoor environment; a complete indoor positioning system typically integrates multiple technologies such as wireless communication, base-station positioning, and inertial navigation, so as to monitor the position of people and objects in an indoor space. The relatively mature target positioning technologies, mainly radar and sonar, are usually used in the military field, are aimed at active positioning of a radiation source, and have poor concealment and safety. Microphone arrays, by contrast, are currently used mainly for sound source separation and positioning; they perform passive target positioning and offer good safety and concealment. The prior art suffers from poor positioning accuracy and poor safety and concealment, and a better solution for reliable positioning is urgently needed.
Disclosure of Invention
This section is intended to summarize some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Simplifications or omissions may be made in this section, in the abstract, and in the title of the application to avoid obscuring their purpose; such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, one technical problem solved by the present invention is: how to realize intelligent scene switching by accurately positioning an indoor moving object.
In order to solve the above technical problem, the invention provides the following technical scheme: a method for intelligent scene switching based on indoor positioning, which determines the range of an area to be positioned in an indoor space and divides the area into several small-range positioning areas; installs, in each small-range positioning area, an infrared sensing receiving device and a group of four microphones located at four different positions in space; uses the infrared sensing receiving device to detect the body surface temperature of a moving object in real time and determine the unique small-range positioning area in which the moving object is located; accurately positions the moving object with the four-microphone array corresponding to the determined area; has an auxiliary positioning module, combined with the accurate positioning, determine the identity of the moving object; and has a scene switching module complete the scene switching operation for each moving object according to its identity.
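Purely as an illustration, the claimed flow (coarse infrared region, acoustic fine position, identity check, scene switch) can be sketched in a few lines of Python; every name below is a hypothetical stand-in, not part of the patented implementation.

```python
def scene_pipeline(coarse_region, fine_position, identity, region_to_scene, last_scene):
    """One pass of the claimed flow: coarse IR region -> fine acoustic
    position -> identity -> scene-switch decision. All names here are
    illustrative assumptions, not the patented implementation."""
    if coarse_region is None or identity is None:
        return None  # cannot switch scenes without both a region and an identity
    new_scene = region_to_scene[coarse_region]
    if new_scene != last_scene.get(identity):
        last_scene[identity] = new_scene  # record the person's new scene
        return ("switch", identity, new_scene, fine_position)
    return ("stay", identity, new_scene, fine_position)
```

A caller would feed `coarse_region` from the infrared step, `fine_position` from the microphone-array step, and `identity` from the auxiliary positioning module.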
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: each small-range positioning area is the detection range of the infrared sensing receiving device in that area, and the detection ranges of the small-range positioning areas together cover the area to be positioned.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the four-microphone array is a group of omnidirectional microphones located at four different positions in the space of the small-range positioning area and arranged according to a certain geometric rule; it performs accurate positioning from the sound data generated by the moving object as it moves.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the four-microphone array is a device that spatially samples sound signals propagating through space; depending on the distance between the sound source emitted by the moving object and the microphone array, a near-field or a far-field model applies, and depending on the topology of the microphone array it can be classified as a linear array, a planar array, or a volume array.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the auxiliary positioning module performs the following identification steps: the four-microphone array accurately positions the moving object; the acquisition module, guided by that position, clearly captures image or single-frame video information of the moving object; and the captured images are compared for similarity to determine the true identity of the moving object.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the auxiliary positioning module further performs the following processing steps: collecting multiple multi-pose pictures of each family member as the training, test, and validation sets of the algorithm model; training the algorithm model; having the acquisition module take a live picture or video, extract the feature data of the moving object, and feed them into the trained algorithm model; having the judging module decide, from the model output and the collected historical data, whether the captured person is a family member, and determine the similarity and confidence of the specific identity; and completing the identification of the moving object's specific, true identity.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the scene switching module performs the following steps: obtaining the behavior state of the moving object in its current activity space; and restoring, in the next activity space, the behavior state the moving object had in the previous space's scene, thereby realizing seamless scene switching.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the scene switching module comprises a network controller, a device controller, and a function module, and performs the following switching steps: the network controller establishes seamless interconnection of the indoor devices through a ZigBee wireless sensor network, and the indoor devices join the same gateway after verification; the device controller stores the state parameters and history records of the devices in different scenes; and the function module starts and initializes the indoor devices and, according to the trigger event that causes the scene switch, switches and synchronizes the state parameters of the corresponding devices to complete the scene switch.
As a preferred scheme of the method for realizing intelligent scene switching based on indoor positioning: the function module comprises an event trigger module, a scene switching control module, and a scene execution module. The event trigger module generates trigger-event information from the position or scene change observed after the auxiliary positioning module, combined with the accurate positioning, has determined the identity of the moving object. The scene switching control module receives the event information from the event trigger module, or state parameters actively reported by other devices, generates the corresponding scene control information, and passes it to the scene execution module. The scene execution module executes the control information, synchronizes the state parameter settings of the corresponding devices, and completes the scene switch.
The invention solves a further technical problem: providing a system on which the above method can be implemented.
In order to solve this technical problem, the invention provides the following technical scheme: a system for intelligent scene switching based on indoor positioning comprises an infrared sensing receiving device, a four-microphone array, an auxiliary positioning module, and a scene switching module. The infrared sensing receiving device and the four-microphone array position a moving object in an indoor space; the auxiliary positioning module determines the identity of the positioned moving object; and the scene switching module switches the device state parameters of the moving object's current scene to the devices in the next scene and synchronizes those state parameters.
The beneficial effects of the invention are: first, better safety and concealment; second, comprehensive positioning across multiple dimensions, namely the sound of the moving person's movements, body temperature, and images captured within a certain range during movement; third, accurate positioning of the moving person can be well assisted and detected in an indoor environment, and the confidence with which the person's specific identity is distinguished is markedly improved; and fourth, on this basis, a more effective and feasible technical method is provided for switching solutions such as smart homes between specific scenes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort. Wherein:
fig. 1 is a schematic overall flowchart structure diagram of a method for implementing scene intelligent switching based on indoor positioning according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of a region to be located and a small-range location region according to a first embodiment of the present invention;
fig. 3 is a schematic view of a flow structure of scene switching according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a comparison of the performance of various positioning modes according to the first embodiment of the present invention;
fig. 5 is a schematic overall structural diagram of a system for implementing intelligent scene switching based on indoor positioning according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features, and advantages of the present invention more comprehensible, embodiments of the present invention are described in detail below with reference to the accompanying figures; it is apparent that the described embodiments are only a part, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort, based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail below with reference to the drawings. For convenience of illustration, the cross-sectional views illustrating the device structure are not partially enlarged to a general scale; the drawings are only examples and should not limit the scope of the present invention. In addition, the three dimensions of length, width, and depth should be included in actual fabrication.
Also in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, which are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted", "connected", and "coupled" in the present invention are to be understood broadly unless otherwise explicitly specified or limited: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art in specific cases.
Example 1
Referring to fig. 1, which shows the overall flow of the method for intelligent scene switching based on indoor positioning in this embodiment: in the prior art, microphone arrays are mainly used for sound source separation and positioning, positioning accuracy varies with the usage scene, safety and concealment are poor, and a better solution for reliable positioning is urgently needed. This embodiment combines a microphone array with an infrared sensor and a camera to accurately position a moving person and determine their specific identity, so that the person can be located better and more accurately and their specific identity distinguished within a small-range activity area. While accurately positioning the person's location and identity, the method also realizes seamless intelligent scene switching across different activity spaces in settings such as the smart home.
More specifically, the method for implementing intelligent scene switching based on indoor positioning in the embodiment includes the following steps:
s1: determining the range of a region 100 to be positioned in an indoor space, and dividing the region 100 to be positioned into a plurality of small-range positioning regions 101;
s2: an infrared sensing receiving device 200 and a group of four microphone arrays 300 which are positioned at four different positions in space are correspondingly arranged in each small-range positioning area 101. In this step, each small-range positioning area 101 is a detection range corresponding to the infrared sensing reception device 200 in each area, and the detection range of each small-range positioning area 101 covers the range of the area to be positioned 100, that is, each small-range positioning area 101 is actually determined by the detection range of the infrared sensing reception device 200, and in a popular way, the small-range positioning area 101 coincides with the detection range of the infrared sensing reception device 200. The area to be positioned 100 is formed by a plurality of small-range positioning areas 101, that is, the small-range positioning areas 101 cover the area to be positioned 100 together.
S3: the infrared sensing receiving device 200 detects the body surface temperature of the moving object in real time and determines the unique small-range positioning area 101 in which the moving object is located. Concretely, each infrared sensing receiving device 200 radiates its detection range outward from its own centre; when a moving object is detected within the range of a given device 200, the object is considered to be in that device's small-range positioning area 101, which is then taken as the unique small-range positioning area 101.
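The "unique small-range positioning area" rule of S3 can be sketched as a simple coverage test; modelling each detection range as a circle of fixed radius is an assumption made here purely for illustration.

```python
import math

def unique_region(position, sensors, radius):
    """Return the id of the one small-range area whose IR sensor covers
    `position`, or None if zero or several sensors cover it.
    `sensors` maps region id -> (x, y) sensor centre; `radius` is the
    detection range radiated outward from each sensor (illustrative model)."""
    covering = [rid for rid, (sx, sy) in sensors.items()
                if math.hypot(position[0] - sx, position[1] - sy) <= radius]
    return covering[0] if len(covering) == 1 else None
```

When the object falls inside more than one detection range, this sketch returns `None`; the overlap case is handled separately by the signal-strength rule described later in the embodiment.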
S4: according to the determined unique small-range positioning area 101, the position of the moving object is accurately determined by the four-microphone array 300 corresponding to the current area. The four-microphone array 300 is a group of omnidirectional microphones located at four different positions in the space of the small-range positioning area 101, arranged according to a certain geometric rule; it positions the moving object accurately from the sound the object generates while moving. The detection range of each four-microphone array 300 covers its small-range positioning area 101.
The four-microphone array 300 is a device that spatially samples sound signals propagating through space. Depending on the distance between the moving object's sound source and the array, a near-field or a far-field model applies; depending on the array topology, it may be a linear, planar, or volume array.
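The near-field/far-field split mentioned above can be illustrated with the common Fraunhofer rule of thumb r > 2L^2/lambda; the patent names the two models but does not fix a threshold, so this particular criterion is an assumption for illustration.

```python
def field_model(source_dist, array_aperture, wavelength):
    """Pick the near- vs far-field propagation model using the Fraunhofer
    criterion r > 2 * L**2 / lambda (a common rule of thumb; the patent
    itself does not specify a threshold)."""
    return "far" if source_dist > 2 * array_aperture ** 2 / wavelength else "near"
```

For a 0.5 m array aperture and a 1 kHz tone (wavelength roughly 0.34 m), the crossover sits at about 1.5 m from the array under this rule.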
S5: the auxiliary positioning module 400, combined with the accurate positioning, determines the identity of the moving object. The auxiliary positioning module 400 performs the following identification steps:
the four-microphone array 300 accurately positions the moving object;
the acquisition module 401, guided by that position, clearly captures image or single-frame video information of the moving object;
and the captured images are compared for similarity to determine the true identity of the moving object.
Further, the auxiliary positioning module 400 performs the following more specific processing steps:
collecting multiple multi-pose pictures of each family member as the training, test, and validation sets of the algorithm model 402;
training the algorithm model 402;
the acquisition module 401 captures a live picture or video, extracts the feature data of the moving object, and feeds them into the trained algorithm model 402;
the judging module 403 decides, from the output of the algorithm model 402 and the collected historical data, whether the captured person is a family member, and determines the similarity and confidence of the specific identity;
and the identification of the moving object's specific, true identity is completed.
S6: the scene switching module 500 precisely completes the scene switching operation for each moving object according to the identity of the moving object. The scene switching module 500 includes the following steps:
acquiring the behavior state of the moving object in its current activity space;
and restoring, in the next activity space, the behavior state the moving object had in the previous space's scene, thereby realizing seamless scene switching.
Further, the scene switching module 500 includes a network controller 501, a device controller 502, and a function module 503, and the scene switching module 500 further includes the following specific switching steps:
the network controller 501 establishes seamless interconnection of indoor devices through a ZigBee wireless sensor network, and adds the indoor devices into the same gateway after verification;
the device controller 502 stores status parameters and history of devices in different scenes;
the function module 503 starts and initializes the indoor device, and switches and synchronizes the state parameters of the corresponding devices to complete scene switching according to the trigger event causing scene switching.
In this step, it should be noted that the function module 503 comprises an event trigger module 503a, a scene switching control module 503b, and a scene execution module 503c. The event trigger module 503a generates trigger-event information from the position or scene change observed after the auxiliary positioning module 400, combined with the accurate positioning, has determined the identity of the moving object. The scene switching control module 503b receives the event information from the event trigger module 503a, or state parameters actively reported by other devices, generates the corresponding scene control information, and passes it to the scene execution module 503c. The scene execution module 503c executes the control information, synchronizes the state parameter settings of the corresponding devices, and completes the scene switch.
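The device-controller and function-module behavior described above (store per-scene device state, then synchronize it on a trigger event) can be sketched as follows; the class and method names are hypothetical, not part of the patent.

```python
class DeviceController:
    """Stores per-scene device state parameters (illustrative stand-in
    for the device controller 502 described in the text)."""
    def __init__(self):
        self.state = {}  # scene -> {device: params}

    def save(self, scene, device, params):
        self.state.setdefault(scene, {})[device] = params

    def load(self, scene):
        return self.state.get(scene, {})

def on_scene_change(controller, prev_scene, next_scene):
    """Trigger -> control -> execute chain collapsed into one function:
    synchronize the previous scene's device parameters into the next."""
    carried = controller.load(prev_scene)
    for device, params in carried.items():
        controller.save(next_scene, device, params)
    return controller.load(next_scene)
```

In the full system the trigger would come from the event trigger module 503a (a detected scene change), and the final `save` calls would be issued over the ZigBee network rather than to an in-memory dictionary.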
Referring to fig. 2, a further note on this embodiment: fig. 2 shows only a division of a planar space with a single set of small-range positioning areas 101. It should be understood that the area to be positioned 100 and the small-range positioning areas 101 are in fact spatial concepts covering the indoor three-dimensional space, and the detection ranges of the infrared sensing receiving device 200 and the four-microphone array 300 are likewise three-dimensional, covering the area to be positioned 100 and the small-range positioning area 101 respectively. Within the indoor space, the infrared sensing receiving device 200 first performs rough small-range detection, and the four-microphone array 300 then performs accurate positioning.
Meanwhile, since the small-range positioning areas 101 must cover the area to be positioned 100 and the infrared sensing receiving devices 200 are installed at several different positions, their detection ranges necessarily overlap (part A in the figure). A part detected jointly by two small-range positioning areas 101 is attributed according to received signal strength. When the signal strengths on the two sides are equal, the auxiliary positioning module 400 can be skipped and accurate positioning completed directly from the spatial position coordinates of the infrared sensing receiving devices 200; when they are unequal, the four-microphone array 300 of the small-range positioning area 101 with the stronger signal is selected for accurate positioning. Locating by signal strength alone would require a complex algorithm and a highly accurate signal receiver, which is why this embodiment uses the four-microphone array 300 for accurate positioning within the small-range positioning area 101.
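The overlap rule just described (equal strengths: position directly from the sensors' coordinates; unequal strengths: defer to the stronger region's microphone array) might be sketched like this; taking the midpoint of the two sensors in the equal case is an added assumption of this sketch, not stated in the text.

```python
def resolve_overlap(strengths, sensor_coords):
    """Resolve a point detected by two overlapping IR regions.
    `strengths` maps exactly two region ids to received signal strength;
    `sensor_coords` maps region id -> (x, y). Equal strengths yield a
    direct position (midpoint here, as an illustrative assumption);
    otherwise the stronger region's microphone array takes over."""
    (r1, s1), (r2, s2) = strengths.items()
    if s1 == s2:
        (x1, y1), (x2, y2) = sensor_coords[r1], sensor_coords[r2]
        return ("direct", ((x1 + x2) / 2, (y1 + y2) / 2))
    return ("microphone_array", r1 if s1 > s2 else r2)
```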
Furthermore, among the many microphone-array sound source localization methods, techniques based on the time difference of arrival of sound centre on accurately estimating the propagation delay: the sound source position is obtained by cross-correlating the signals between microphones, followed by simple delay-and-sum, geometric calculation, direct use of the cross-correlation result, or a steered-power-response search. The improved phase-transform-weighted steered response power (SRP-PHAT) sound source localization algorithm adopted in this embodiment is more robust in reverberant environments: it improves noise and reverberation resistance, reduces the amount of computation, raises the real-time processing speed, and can localize sound sources in real environments. It places no specific requirements on the array geometry and is also applicable to distributed arrays, so the algorithm achieves a better effect.
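A time-domain delay-and-sum steered-power search conveys the core idea behind SRP-style localization; note that real SRP-PHAT applies phase-transform weighting to the generalized cross-correlation in the frequency domain, which this dependency-free sketch deliberately omits.

```python
def srp_grid_search(signals, delays_for_point, candidates):
    """Steered-response-power search sketched in the time domain: for each
    candidate point, align the microphone signals by the point's expected
    sample delays and take the power of the summed signal; the candidate
    with the highest power wins. (Real SRP-PHAT weights the generalized
    cross-correlation by the phase transform in the frequency domain.)"""
    best, best_power = None, float("-inf")
    for point in candidates:
        delays = delays_for_point(point)
        n = len(signals[0])
        power = 0.0
        for t in range(n):
            s = sum(sig[t - d] for sig, d in zip(signals, delays)
                    if 0 <= t - d < n)
            power += s * s
        if power > best_power:
            best, best_power = point, power
    return best
```

Here `delays_for_point` would be derived from array geometry and the speed of sound; the grid of `candidates` covers the small-range positioning area.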
In this embodiment, the auxiliary positioning module 400 mainly uses AI deep-learning techniques, namely target detection and classification algorithms. Multiple multi-pose pictures of each family member are first collected as the training, test, and validation sets of the algorithm; feature data of the moving person are extracted from a live picture or video captured by the camera; and the trained model predicts whether the captured person is a family member, determines the similarity and confidence of the specific identity, and completes the identification of the person's specific, true identity. The acquisition module 401 is a camera device, such as a video camera, and the collected data include, for example, the height, body shape, and contour of the subject. The algorithm model 402 is an image recognition model: for example, a Python script using a convolutional neural network can classify images by loading a trained model, such as one of the deep-learning image classification models shipped with Keras (VGG16, VGG19, ResNet50, Inception V3, and Xception), and classifying the input image with that architecture. The pipeline is: acquire the image, preprocess it (e.g., binarization, color inversion, and similar methods) to obtain feature data, train (classifier training and classification decision), and recognize. The judging module 403 completes the identity recognition of the moving object by matching and comparing against the collected reference data.
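The acquisition-and-recognition pipeline described above (preprocess by binarization, extract features, match against collected member data) can be caricatured without any deep-learning dependency; the similarity measure and acceptance threshold below are illustrative stand-ins for the trained CNN classifier, not the patented model.

```python
def binarize(pixels, threshold=128):
    """Preprocessing step named in the text: threshold grey values to 0/1."""
    return [1 if p >= threshold else 0 for p in pixels]

def similarity(a, b):
    """Fraction of matching binary pixels: a deliberately simple stand-in
    for the trained classifier's score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def identify(captured, member_templates, accept=0.9):
    """Return (best_member, score) if the best match clears the acceptance
    threshold, else (None, score), i.e. the person is treated as a stranger."""
    scored = {m: similarity(binarize(captured), tpl)
              for m, tpl in member_templates.items()}
    best = max(scored, key=scored.get)
    return (best if scored[best] >= accept else None), scored[best]
```

In the real system the template comparison would be replaced by the CNN's class probabilities, and the acceptance threshold would be tuned on the validation set.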
Referring to fig. 3, the scene switching process of the scene switching module 500 in this embodiment is further illustrated. The main techniques for auxiliary positioning and identity determination are: the infrared sensor (infrared sensing receiving device 200) coarsely locates the moving person or object by detecting body surface temperature; the four-microphone array locates the moving person by the sounds they emit; and the camera collects live images or video of the moving person nearby and, combined with a model trained by an AI deep-learning algorithm, predicts the probability that the person is a family member in order to determine their real identity. It should be noted that the event triggering module 503a is triggered by a position or scene change of the located moving object. A corresponding scene description is defined for each position range, for example a living room scene for one range, a bathroom scene for another, and a bedroom scene for a third. When the detected position of the moving object changes from the living room scene to the bedroom scene, the event triggering module 503a is triggered to generate the corresponding control information, which controls and synchronizes the device state information from the living room to the bedroom, finally completing the scene switch.
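The event triggering logic just described, position ranges mapped to scene names, with a trigger fired only when the located object crosses into a different scene's range, might be sketched as follows. The room bounds and scene names are hypothetical values chosen for illustration.

```python
# Hypothetical room bounds (x_min, x_max, y_min, y_max) in metres.
SCENES = {
    "living_room": (0.0, 5.0, 0.0, 4.0),
    "bedroom":     (5.0, 9.0, 0.0, 4.0),
    "bathroom":    (5.0, 9.0, 4.0, 6.0),
}

def scene_of(x, y):
    """Map a located position to the scene whose range contains it."""
    for name, (x0, x1, y0, y1) in SCENES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

class EventTrigger:
    """Fire a scene-switch event only when the located object's
    position crosses from one scene's range into another's."""
    def __init__(self):
        self.current = None

    def update(self, x, y):
        scene = scene_of(x, y)
        if scene is not None and scene != self.current:
            prev, self.current = self.current, scene
            return {"from": prev, "to": scene}  # control info for the switch
        return None                             # same scene: nothing to do
```

Movement within one scene's range produces no event, matching the description above where only a living-room-to-bedroom transition triggers control information.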
Scene one:
in this embodiment, the actual areas of a family's living room, bedroom and bathroom are used as the test areas; together they form the space of the area to be positioned 100. The infrared sensing receiving devices 200 and the four-microphone arrays 300 are arranged in this space, with each small-range positioning area 101 radiating from the detection range of an infrared sensing receiving device 200 and the areas jointly covering the area to be positioned 100. In this scenario, both the positioning method of this embodiment and the prior-art methods are applied. In the test, 24 reference monitoring points with coordinates (a_n, b_n) are set uniformly to cover the space of the area to be positioned 100. Following this method, infrared sensing receiving devices 200 are installed at the 24 reference monitoring points, each radiating a small-range positioning area 101, and a four-microphone array 300 is arranged in each small-range positioning area 101, completing the installation of this method. For comparison with traditional methods, four groups of independent positioning experiments are run with infrared sensing, a microphone array, Bluetooth and geomagnetism: the infrared sensing, Bluetooth and geomagnetic devices are each installed at the 24 reference monitoring points and tested separately, while the microphone array is tested independently under the arrangement of this method with the 24 infrared sensing receiving devices 200 removed.
In the testing process, each positioning mode is tested 5 times; the error of each trial is calculated, and the 5 errors are averaged to obtain the final error value.
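The per-trial error and 5-trial averaging described above amount to the following computation. The Euclidean distance metric is an assumption; the patent does not state the error metric explicitly.

```python
import math

def position_error(estimate, truth):
    """Euclidean distance between an estimated and a true position."""
    return math.dist(estimate, truth)

def mean_error(estimates, truth):
    """Average the per-trial errors, as in the 5-trial test above."""
    errs = [position_error(e, truth) for e in estimates]
    return sum(errs) / len(errs)
```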
TABLE 1 orientation mode Performance comparison (unit: m)
Positioning mode                        Average error   Error at 80% accuracy
Infrared sensing                        2.496           2.52
Microphone array                        1.06            1.12
Bluetooth                               1.18            1.24
Geomagnetism                            4.35            4.42
Infrared + microphone array fusion      0.72            0.82
The positioning algorithms are simulated in Matlab, comparing infrared sensing positioning, microphone array positioning, the fusion of the two, Bluetooth positioning and geomagnetic positioning. The cumulative error distributions obtained from the simulation are shown in the accompanying graph, where the abscissa is the error value and the ordinate the corresponding cumulative error probability; Composite denotes the fusion positioning of infrared sensing and the microphone array, MicArray denotes microphone array positioning, BlueTooth denotes Bluetooth positioning, InfraredSensor denotes infrared sensing positioning, and Geomagnetism denotes geomagnetic positioning. Comparing the 5 positioning modes verifies the advantage of the infrared sensing and microphone array fusion scheme. The average error of infrared sensing alone is 2.496 m, reaching 80% positioning accuracy at an error of 2.52 m; the microphone array alone averages 1.06 m, reaching 80% accuracy at 1.12 m; Bluetooth alone averages 1.18 m, reaching 80% accuracy at 1.24 m; geomagnetism alone averages 4.35 m, reaching 80% accuracy at 4.42 m; and the fusion of infrared sensing and the microphone array averages 0.72 m, reaching 80% accuracy at 0.82 m. The experiments show that the fusion mode greatly improves positioning accuracy, by 67.5%, 22.7%, 30.5% and 81.15% relative to infrared sensing, the microphone array, Bluetooth and geomagnetism alone, respectively. Therefore, in combination with the schematic diagram of fig. 4, this embodiment achieves higher accuracy than traditional positioning modes, ensuring more accurate intelligent scene switching, which is of great significance.
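The cumulative error distribution used in the comparison (error on the abscissa, cumulative probability on the ordinate) can be computed as below. The patent used Matlab; this Python sketch reproduces the idea of reading off the error bound reached at a given accuracy, such as the 80% figures quoted for each mode.

```python
import numpy as np

def error_cdf(errors):
    """Empirical cumulative distribution of positioning errors: for
    each sorted error value, the fraction of trials whose error is at
    most that value."""
    e = np.sort(np.asarray(errors, dtype=float))
    p = np.arange(1, len(e) + 1) / len(e)
    return e, p

def error_at_probability(errors, prob):
    """Smallest error bound achieved with at least `prob` probability
    (e.g. the 0.82 m figure quoted at 80% for the fused scheme)."""
    e, p = error_cdf(errors)
    return float(e[np.searchsorted(p, prob)])
```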
Example 2
Referring to the schematic diagram of fig. 4, in order to implement the method for implementing intelligent scene switching based on indoor positioning, a system for implementing intelligent scene switching based on indoor positioning is provided in this embodiment. Specifically, the system comprises an infrared sensing receiving device 200, a four-microphone array 300, an auxiliary positioning module 400 and a scene switching module 500; wherein the infrared sensing reception device 200 and the four-microphone array 300 are used for localization of a moving object of an indoor space; the auxiliary positioning module 400 is used for determining the identity of the positioned moving object; the scene switching module 500 is used for switching the device state parameters of the active object in the current scene to the next scene and synchronizing the state parameters.
Further, the infrared sensing receiving devices 200 are infrared sensing receivers arranged at different positions in the space so as to cover the area to be positioned 100. Each is a device that can receive infrared signals and independently complete reception and output, is compatible with TTL-level signals, and is about the size of an ordinary plastic-packaged transistor; such devices are suitable for all kinds of infrared remote control and infrared data transmission. In this embodiment, accurate measurement is not required: only the presence or absence of a detection signal is needed to determine the unique small-range positioning area 101. A microphone array is a sound collection system that uses multiple microphones to collect sound from different spatial directions. With the microphones arranged according to specified requirements and combined with the corresponding algorithms (arrangement + algorithm), many room acoustics problems can be solved, such as sound source localization, dereverberation, speech enhancement and blind source separation, thereby realizing positioning. The auxiliary positioning module 400 is a processing chip embedded with an image recognition algorithm model, used to perform image recognition of the moving object and confirm its identity.
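Because only the presence or absence of a detection signal is needed, determining the unique small-range positioning area 101 reduces to a simple lookup. A sketch, under the assumption that the receivers report boolean detections keyed by area identifier:

```python
def locate_small_range_area(ir_signals):
    """Pick the unique small-range positioning area from bare
    presence/absence readings: each receiver only reports whether a
    warm moving body is inside its detection range, with no distance
    or angle information."""
    active = [area for area, detected in ir_signals.items() if detected]
    if len(active) == 1:
        return active[0]
    return None  # no detection, or ambiguous overlap between areas
```

Once the unique area is known, only that area's four-microphone array needs to run the (more expensive) precise acoustic localization, which is the coarse-then-fine division of labour the embodiment relies on.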
Wherein the scene switching module 500 comprises a network controller 501, a device controller 502 and a function module 503. The network controller 501 is also called a network card, network adapter or network interface card (NIC). A computer without a network card cannot communicate with other computers on the network or obtain any service provided by a server, and a machine without a network card cannot act as a server at all, so a network card is as essential to a server as a processor is to an ordinary PC. The network card in a PC is usually used to connect the PC to a LAN (local area network), while a server network card generally connects the server to network devices such as switches. The device controller 502 is an entity in a computer whose primary responsibility is to control one or more I/O devices so as to exchange data between the I/O devices and the computer. It is the interface between the CPU and the I/O devices: it receives commands from the CPU and controls the I/O devices, freeing the processor from cumbersome device-control transactions. It is an addressable device; when it controls a single device it has a unique device address, and when it can connect multiple devices it contains multiple device addresses, each corresponding to one device. A controller coordinates the operation of the whole computer, and generally requires a program counter (PC), an instruction register (IR), an instruction decoder (ID), timing and control circuits, a pulse source, interrupts and the like working together.
The function module 503 comprises an event triggering module 503a, a scene switching control module 503b and a scene execution module 503c. The function module 503 is likewise a processing chip controlled by an implanted program and is integrated with the chip of the auxiliary positioning module 400 into one hardware device, with the software part operated through a computer interface. The function module 503 is therefore the software control part of the scene switching module 500, while the network controller 501 and the device controller 502 are its hardware implementation parts, cooperating correspondingly with the auxiliary positioning module 400.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (6)

1. A method for realizing intelligent scene switching based on indoor positioning, characterized in that it comprises the following steps:
determining the range of a region (100) to be positioned in indoor space, and dividing the region (100) to be positioned into a plurality of small-range positioning regions (101);
installing in each small-range positioning area (101) a corresponding infrared sensing receiving device (200) and a four-microphone array (300) whose microphones occupy four different positions in space; each small-range positioning area (101) is the detection range of the corresponding infrared sensing receiving device (200), i.e. each small-range positioning area (101) is actually determined by, and coincides with, the detection range of its infrared sensing receiving device (200); the area to be positioned (100) is composed of the plurality of small-range positioning areas (101), i.e. the small-range positioning areas (101) jointly cover the area to be positioned (100);
the infrared sensing receiving equipment (200) detects the body surface temperature of the moving object in real time and determines the only small-range positioning area (101) where the moving object is located;
according to the determined unique small-range positioning area (101), accurately positioning the position of the moving object by using the corresponding four microphone arrays (300) in the current area;
the to-be-positioned area (100) and the small-range positioning area (101) are indoor three-dimensional spaces, the detection radiation ranges of the infrared sensing receiving equipment (200) and the four-microphone array (300) are also in a three-dimensional state, and the infrared sensing receiving equipment and the four-microphone array can respectively cover the to-be-positioned area (100) and the small-range positioning area (101), so that in the indoor space, the infrared sensing receiving equipment (200) can perform rough small-range detection and then the four-microphone array (300) can perform accurate positioning;
the auxiliary positioning module (400) performs identity judgment on the movable object in combination with accurate positioning;
the auxiliary positioning module (400) comprises the steps of identifying,
the four-microphone array (300) precisely locates the moving object;
the acquisition module (401) clearly captures the image or video single-frame picture information of the moving object according to the positioning;
comparing the similarity of the captured images to determine the real identity of the moving object;
the auxiliary positioning module (400) further comprises the following processing steps,
collecting a plurality of pieces of multi-state picture data of various family members as a training set, a testing set and a verification set of an algorithm model (402);
training the algorithmic model (402);
the acquisition module (401) takes a live picture or video to extract feature data of the moving object, and inputs the trained algorithm model (402);
the judging module (403) judges, from the output of the algorithm model (402) and the collected historical data, whether the captured person is a member of the family, and determines the similarity and reliability of the specific identity;
adopting an improved phase transformation weighted controllable response power SRP-PHAT sound source positioning algorithm;
completing the identification of the specific real identity of the moving object; the scene switching module (500) is used for accurately completing the scene switching operation for each moving object according to the identity of the moving object;
the scene switching module (500) comprises a network controller (501), a device controller (502) and a function module (503), comprising the following switching steps,
the network controller (501) establishes seamless interconnection of indoor equipment through a ZigBee wireless sensor network, and adds the indoor equipment into the same gateway after verification;
the device controller (502) stores status parameters and history of devices in different scenarios;
the function module (503) starts and initializes the indoor equipment, and switches and synchronizes the state parameters of the corresponding equipment to complete scene switching according to the trigger event causing the scene switching;
the functional module (503) comprises an event trigger module (503 a), a scene switching control module (503 b) and a scene execution module (503 c);
the event triggering module (503 a) generates triggering event information according to the position change or scene change after the auxiliary positioning module (400) combines with accurate positioning to perform identity judgment on the moving object;
the scene switching control module (503 b) receives the event information input by the event triggering module (503 a) or the state parameters actively input by other devices, and correspondingly generates scene control information after receiving the event information or the state parameters, and transmits the scene control information to the scene execution module (503 c);
the scene execution module (503 c) executes the control information, synchronizes the state parameter settings of the corresponding devices and completes scene switching.
2. The method for implementing intelligent scene switching based on indoor positioning as claimed in claim 1, wherein: each small-range positioning area (101) is a detection range corresponding to the infrared sensing receiving equipment (200) in each area, and the detection range of each small-range positioning area (101) covers the range of the area to be positioned (100).
3. The method for realizing intelligent scene switching based on indoor positioning as claimed in claim 1 or 2, wherein: the four-microphone array (300) is formed by arranging a group of omnidirectional microphones positioned at four different positions in space of the small-range positioning area (101) according to a certain shape rule, and accurate positioning is carried out according to sound data generated by the moving object in the moving process.
4. The method of claim 3 for enabling intelligent switching of scenes based on indoor positioning, wherein: the four-microphone array (300) is a device for spatially sampling a spatially propagated sound signal; the sound field is modeled as a near-field model or a far-field model according to the distance between the sound source produced by the movement of the moving object and the microphone array, and the array is classified as a linear array, a planar array or a volume array according to the topological structure of the microphone array.
5. The method of claim 4 for implementing intelligent scene switching based on indoor positioning, wherein: the scene switching module (500) performs the steps of,
acquiring the behavior state of the active object in the current active space;
and restoring, in the next active space, the behavior state the active object had in the previous space scene, thereby realizing seamless scene switching.
6. A system for implementing intelligent scene switching based on indoor positioning, which is used for implementing the method for implementing intelligent scene switching based on indoor positioning as claimed in claim 1, and is characterized in that: the system comprises an infrared sensing receiving device (200), a four-microphone array (300), an auxiliary positioning module (400) and a scene switching module (500);
the infrared sensing reception device (200) and the four-microphone array (300) are used for positioning of moving objects of an indoor space;
the auxiliary positioning module (400) is used for judging the identity of the positioned movable object;
the scene switching module (500) is used for switching the device state parameters of the current scene of the moving object to the devices of the next scene and synchronizing the state parameters.
CN201910912256.2A 2019-09-25 2019-09-25 Method and system for realizing intelligent scene switching based on indoor positioning Active CN110597077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910912256.2A CN110597077B (en) 2019-09-25 2019-09-25 Method and system for realizing intelligent scene switching based on indoor positioning

Publications (2)

Publication Number Publication Date
CN110597077A CN110597077A (en) 2019-12-20
CN110597077B true CN110597077B (en) 2022-11-18

Family

ID=68863379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910912256.2A Active CN110597077B (en) 2019-09-25 2019-09-25 Method and system for realizing intelligent scene switching based on indoor positioning

Country Status (1)

Country Link
CN (1) CN110597077B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111245688A (en) * 2019-12-26 2020-06-05 的卢技术有限公司 Method and system for intelligently controlling electrical equipment based on indoor environment
CN111615028A (en) * 2020-06-19 2020-09-01 歌尔科技有限公司 Temperature detection method and device, sound box equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9094645B2 (en) * 2009-07-17 2015-07-28 Lg Electronics Inc. Method for processing sound source in terminal and terminal using the same
CN103902963B (en) * 2012-12-28 2017-06-20 联想(北京)有限公司 The method and electronic equipment in a kind of identification orientation and identity
CN103777615A (en) * 2014-01-28 2014-05-07 宇龙计算机通信科技(深圳)有限公司 Control device and method of intelligent home appliance
CN104896668B (en) * 2015-05-29 2017-12-12 广东美的暖通设备有限公司 A kind of room air conditioner intelligent regulating system and its adjusting method
CN105137771B (en) * 2015-07-27 2017-11-14 上海斐讯数据通信技术有限公司 A kind of intelligent appliance control system and method based on mobile terminal
US10762640B2 (en) * 2017-05-22 2020-09-01 Creavision Technologies, Ltd. Systems and methods for user detection, identification, and localization within a defined space

Also Published As

Publication number Publication date
CN110597077A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
WO2021093872A1 (en) Crowdsensing-based multi-source information fusion indoor positioning method and system
Wang et al. Accurate and real-time 3-D tracking for the following robots by fusing vision and ultrasonar information
Ngamakeur et al. A survey on device-free indoor localization and tracking in the multi-resident environment
Li et al. Smart home monitoring system via footstep-induced vibrations
CN104394588B (en) Indoor orientation method based on Wi Fi fingerprints and Multidimensional Scaling
CN110597077B (en) Method and system for realizing intelligent scene switching based on indoor positioning
CN109389641A (en) Indoor map integrated data generation method and indoor method for relocating
CN108828501B (en) Method for real-time tracking and positioning of mobile sound source in indoor sound field environment
Xiao et al. Abnormal behavior detection scheme of UAV using recurrent neural networks
Wendeberg et al. Robust tracking of a mobile beacon using time differences of arrival with simultaneous calibration of receiver positions
JP2018061114A (en) Monitoring device and monitoring method
WO2021094766A1 (en) Occupant detection systems
CN111610492A (en) Multi-acoustic sensor array intelligent sensing method and system
US20240053464A1 (en) Radar Detection and Tracking
CN107680312A (en) Security-protecting and monitoring method, electronic equipment and computer-readable recording medium
CN112379330B (en) Multi-robot cooperative 3D sound source identification and positioning method
Vashist et al. KF-Loc: A Kalman filter and machine learning integrated localization system using consumer-grade millimeter-wave hardware
US11729372B2 (en) Drone-assisted sensor mapping
Kudo et al. Utilizing WiFi signals for improving SLAM and person localization
Martinson et al. Robotic discovery of the auditory scene
Even et al. Probabilistic 3-D mapping of sound-emitting structures based on acoustic ray casting
Ding et al. Microphone array acoustic source localization system based on deep learning
Zhu et al. Speaker localization based on audio-visual bimodal fusion
Yoo et al. Multi-target tracking with multiple 2D range scanners
US20210263531A1 (en) Mapping and simultaneous localisation of an object in an interior environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant