CN114281907A - Indoor environment advancing processing method and device, electronic equipment and readable storage medium - Google Patents

Indoor environment advancing processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN114281907A
Authority
CN
China
Prior art keywords
positioning
indoor environment
instruction
information
place
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111284418.6A
Other languages
Chinese (zh)
Inventor
廖柏锠
廖加威
任晓华
黄晓琳
董粤强
赵慧斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111284418.6A priority Critical patent/CN114281907A/en
Publication of CN114281907A publication Critical patent/CN114281907A/en
Pending legal-status Critical Current

Landscapes

  • Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a travel processing method and apparatus for an indoor environment, an electronic device, and a readable storage medium, and relates to the technical fields of data processing and image processing, and in particular to artificial intelligence fields such as computer vision, intelligent voice, and intelligent traffic. The specific implementation scheme is as follows: a positioning instruction provided by a user in an indoor environment is acquired, the positioning instruction including a gesture instruction and a voice instruction; according to the positioning instruction, positioning position information of the positioning place indicated by the instruction in the indoor environment and positioning description information of the positioning place are obtained; and the positioning position information and the positioning description information are associated, so that automatic travel in the indoor environment can be performed.

Description

Indoor environment advancing processing method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of data processing technology and image processing technology, and in particular, to the field of artificial intelligence technology such as computer vision, intelligent voice technology, and intelligent traffic technology.
Background
With the vigorous development of the robot industry, more and more robots are being applied in indoor environments such as warehouses, factories, hospitals, shopping malls, hotels, and office spaces to provide services such as article delivery and guidance, which greatly improves production and service efficiency, reduces labor costs, and provides convenience to users.
Generally, autonomous navigation of a robot in an indoor environment requires path planning based on a manually mapped indoor environment map in which specific positioning points are designated.
Disclosure of Invention
The present disclosure provides a travel processing method and apparatus for an indoor environment, an electronic device, and a readable storage medium.
According to an aspect of the present disclosure, there is provided a travel processing method of an indoor environment, including:
acquiring a positioning instruction provided by a user in an indoor environment, wherein the positioning instruction comprises a gesture instruction and a voice instruction;
obtaining, according to the positioning instruction, positioning position information of a positioning place indicated by the positioning instruction in the indoor environment and positioning description information of the positioning place;
and associating the positioning position information with the positioning description information, so as to perform automatic travel in the indoor environment.
According to another aspect of the present disclosure, there is provided a travel processing apparatus for an indoor environment, including:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a positioning instruction provided by a user in an indoor environment, and the positioning instruction comprises a gesture instruction and a voice instruction;
the positioning unit is used for acquiring positioning position information of a positioning place indicated by the positioning instruction in the indoor environment and positioning description information of the positioning place according to the positioning instruction;
and the association unit is used for associating the positioning position information with the positioning description information, so as to perform automatic travel in the indoor environment.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the aspects and any possible implementation described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the above-described aspect and any possible implementation.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the aspect and any possible implementation as described above.
According to the technical solution of the present disclosure, no manual participation is needed, the operation is simple, and the accuracy is high, so that the efficiency and reliability of indoor environment positioning are improved.
In addition, by adopting the technical scheme provided by the disclosure, the user experience can be effectively improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed for the embodiments or the prior art descriptions will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and those skilled in the art can also obtain other drawings according to the drawings without inventive labor. The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1A is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 1B is a schematic diagram of a gesture command provided by a user in the embodiment corresponding to FIG. 1A;
FIG. 1C is another schematic diagram of a gesture command provided by a user in the embodiment corresponding to FIG. 1A;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
fig. 3 is a block diagram of an electronic device for implementing a travel processing method of an indoor environment according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is to be understood that the described embodiments are only a few, and not all, of the disclosed embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terminal device involved in the embodiments of the present disclosure may include, but is not limited to, a mobile phone, a Personal Digital Assistant (PDA), a wireless handheld device, a Tablet Computer (Tablet Computer), and other intelligent devices; the display device may include, but is not limited to, a personal computer, a television, and the like having a display function.
In addition, the term "and/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
An existing indoor navigation system for an advanceable smart terminal such as a robot needs to scan the indoor environment in advance with sensing devices such as a laser radar and a visual camera in order to construct a mapping map. The map is then checked manually on a desktop terminal device such as a computer, tablet, or mobile phone, and specific positioning points are marked, so as to complete the mapping map of the indoor environment.
If a positioning point needs to be updated, for example added or modified, it must again be adjusted manually.
In addition, the advanceable smart terminal can only travel to positioning points that have already been designated on the mapping map.
Existing mapping-map-based indoor navigation of advanceable smart terminals therefore has poor flexibility and usability.
Therefore, it is desirable to provide a more flexible and easier-to-use processing method for mapping-map-based indoor navigation of an advanceable smart terminal.
Fig. 1A is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1A:
101. A positioning instruction provided by a user in an indoor environment is acquired, wherein the positioning instruction comprises a gesture instruction and a voice instruction.
102. According to the positioning instruction, positioning position information of the positioning place indicated by the positioning instruction in the indoor environment and positioning description information of the positioning place are obtained.
103. The positioning position information and the positioning description information are associated, so as to perform automatic travel in the indoor environment.
Therefore, the positioning position information of any positioning place in the indoor environment and the corresponding positioning description information are correlated, and a more convenient automatic traveling scheme in the indoor environment can be provided based on the correlation processing.
For example, based on the association processing, update operations such as adding a positioning place, modifying a positioning place, or deleting a positioning place may be performed on an initial mapping map of the indoor environment to construct an updated mapping map, thereby rapidly updating the mapping map.
Or, for another example, without adding a specific positioning place to the mapping map in advance, an advanceable smart terminal such as a delivery robot or a vehicle may be directly instructed, based on the association processing, to travel automatically to the specific positioning place in the indoor environment, so that automatic travel control in the indoor environment is quickly realized.
It should be noted that part or all of the execution subjects of 101 to 103 may be an application located in a local terminal (for example, a portable intelligent terminal such as a delivery robot or a vehicle), or may also be a functional unit such as a Software Development Kit (SDK) or the like provided in the application located in the local terminal, or may also be a processing engine located in a server on the network side, or may also be a distributed system located on the network side, for example, a processing engine or a distributed system in a portable intelligent terminal processing platform of an indoor environment on the network side, and the embodiment is not particularly limited thereto.
It is to be understood that the application may be a native application (native app) installed on the local terminal, or may also be a web page program (webApp) of a browser on the local terminal, which is not limited in this embodiment.
In this way, a positioning instruction that includes a gesture instruction and a voice instruction is obtained from a user in the indoor environment, and positioning position information of the positioning place indicated by the instruction in the indoor environment and positioning description information of that place are then obtained according to the instruction, so that the positioning position information and the positioning description information can be associated for automatic travel in the indoor environment. Because an instruction mode combining a gesture instruction and a voice instruction is adopted, the positioning position information of the place indicated by the gesture instruction in the indoor environment and the positioning description information of the place indicated by the voice instruction can be associated automatically. Based on this association processing, a convenient automatic travel scheme in the indoor environment can be provided: for example, update operations such as adding, modifying, or deleting positioning places can be performed on the initial mapping map of the indoor environment to construct an updated mapping map, or an advanceable smart terminal such as a delivery robot or a vehicle can be directly instructed to travel automatically in the indoor environment. No manual participation is needed, the operation is simple, and the accuracy is high, so that the efficiency and reliability of indoor environment positioning are improved.
By using gesture recognition and voice recognition interaction technologies, the technical solution of the present disclosure allows new positioning places to be added to the mapping map rapidly, which improves the efficiency and reliability of mapping map construction.
Likewise, by means of gesture recognition and voice recognition interaction technologies, an advanceable smart terminal such as a delivery robot or a vehicle can be rapidly instructed to travel automatically to a specific positioning place in the indoor environment, which improves the flexibility and usability of navigation based on the mapping map and makes the operation more natural and user-friendly.
In the present disclosure, an ordinary delivery robot can be used as the advanceable smart terminal to collect the positioning instruction provided by the user in the indoor environment, so as to execute 101 to 103.
Optionally, in a possible implementation manner of this embodiment, before 101, the position and the orientation of the advanceable smart terminal may be further adjusted according to the body position and the face orientation of the user, for example, the image capture device of the advanceable smart terminal is adjusted to face the front of the user, so as to ensure that the advanceable smart terminal can accurately capture the positioning instruction provided by the user.
Then, after adjusting the position and orientation of the advanceable smart terminal, gesture instructions and voice instructions provided by the user within the indoor environment may be collected by the advanceable smart terminal for execution 101.
Optionally, in a possible implementation manner of this embodiment, in 102, the positioning position information of the positioning place indicated by the gesture instruction in the indoor environment may be obtained according to the gesture instruction, and the positioning description information of the positioning place may be obtained according to the voice instruction. Wherein:
The positioning position information of the positioning place refers to position description information identifying the relative position of the positioning place in the indoor environment, for example, 50 m to the east of the entrance.
The positioning description information of the positioning place refers to description information describing basic attributes of the positioning place, for example, a name attribute indicated by the voice instruction "this is the front desk" or a function attribute indicated by the voice instruction "this is the toilet"; it may also describe a travel attribute of the positioning place, for example, a travel destination attribute indicated by the voice instruction "wait here".
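As a rough illustration only (not part of the original filing), the two kinds of information could be carried together in a structure like the following Python sketch; the class name and fields are assumptions made for clarity.

    from dataclasses import dataclass, field

    @dataclass
    class PositioningPlace:
        # Positioning position information: the place's location resolved against
        # the indoor environment, simplified here to 2-D map coordinates.
        x: float
        y: float
        # Positioning description information: attributes parsed from the voice
        # instruction, e.g. {"name": "front desk"} or {"function": "toilet"}.
        attributes: dict = field(default_factory=dict)
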
In a specific implementation process, the gesture instruction may be subjected to gesture recognition processing to determine the gesture direction pointed to by the user's finger, and the positioning position information of the positioning place indicated by the gesture instruction in the indoor environment may then be obtained based on that gesture direction.
Specifically, the direction information pointed to by the user's finger corresponding to the gesture instruction may be obtained according to the gesture instruction, and the relative distance between the positioning place and the user may then be obtained according to that direction information and the height of the user's finger above the ground. The positioning position information of the positioning place can then be obtained according to the positioning position information of the user in the indoor environment and the relative distance between the positioning place and the user. Wherein:
the direction information pointed by the user's finger may refer to an angle between a gesture direction pointed by the user's finger and a specific reference direction (e.g., a horizontal direction or a vertical direction, etc.).
The height information of the user finger from the ground may be a distance from a fingertip position of the user finger to the ground.
For example, as shown in fig. 1B, the distance h between the fingertip of the user's finger and the ground and the included angle α between the gesture direction pointed to by the user's finger and a specific reference direction (e.g., a horizontal direction or a vertical direction) may be obtained. The relative distance D between the positioning place and the user can then be obtained from h and α, that is, D = h·tan α, and the relative position between the positioning place and the user can be determined based on this relative distance. Then, the positioning position information of the positioning place in the indoor environment can be obtained according to the positioning position information of the user in the indoor environment and the relative position between the positioning place and the user. The positioning position information of the positioning place in the indoor environment may be relative position information between the positioning place and a reference position of the indoor environment.
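The following Python sketch illustrates the D = h·tan α relation and the projection of the place into map coordinates; it is not taken from the disclosure. It assumes α is measured from the vertical (the convention under which D = h·tan α holds) and that the horizontal heading of the gesture is available from the gesture recognition step; all function names are invented.

    import math

    def relative_distance(h: float, alpha: float) -> float:
        """D = h * tan(alpha): horizontal distance from the user to the pointed-at
        place, where h is the fingertip height above the ground (metres) and alpha
        is the angle (radians) between the pointing direction and the vertical."""
        return h * math.tan(alpha)

    def place_position(user_xy, heading, h, alpha):
        """Project the positioning place into map coordinates from the user's own
        positioning position and the horizontal heading of the gesture."""
        d = relative_distance(h, alpha)
        ux, uy = user_xy
        return ux + d * math.cos(heading), uy + d * math.sin(heading)

    # e.g. fingertip 1.2 m above the ground, pointing 60 degrees off the vertical:
    x, y = place_position((0.0, 0.0), math.radians(30), 1.2, math.radians(60))
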
In the implementation process, the distance h from the fingertip position of the user finger to the ground can be acquired in various ways.
For example, as shown in fig. 1C, the relative distance d1 between the advanceable smart terminal and the fingertip of the user's finger and the relative distance d2 between the position of the advanceable smart terminal and the position of the user may be collected by using a laser ranging technique. After d1 and d2 have been acquired, the height h of the user's fingertip above the ground can be further obtained according to the pre-configured reference height H of the advanceable smart terminal, the acquired relative distance d1, and the acquired relative distance d2.
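The exact expression for h appears in the original filing only as an equation image. One consistent reading of the fig. 1C geometry, assuming the fingertip lies roughly above the user's ground position and below the terminal's ranging sensor, is h = H − √(d1² − d2²); the Python sketch below encodes that assumption and should not be taken as the filing's exact formula.

    import math

    def fingertip_height(H: float, d1: float, d2: float) -> float:
        """Estimate the fingertip height above the ground.
        H  : pre-configured reference height of the advanceable smart terminal
        d1 : measured distance from the terminal to the fingertip (laser ranging)
        d2 : measured distance from the terminal's position to the user's position
        Assumption: h = H - sqrt(d1**2 - d2**2); the sign of the square-root term
        flips if the fingertip is above the sensor."""
        return H - math.sqrt(max(d1 * d1 - d2 * d2, 0.0))
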
In this implementation, the obtaining of the included angle α between the gesture direction pointed by the user's finger and a specific reference direction (e.g., a horizontal direction or a vertical direction, etc.) may be performed in various manners.
For example, image information of the user's finger may be acquired by the advanceable smart terminal, and the included angle α between the gesture direction pointed to by the user's finger and the specific reference direction (e.g., a horizontal direction or a vertical direction) may then be determined from that image information.
In another specific implementation process, the voice instruction may be subjected to voice recognition processing to obtain a voice recognition result of the voice instruction. After the voice recognition result is obtained, attribute keywords in the voice recognition result can be recognized using a keyword recognition technology, such as named entity recognition, and used as the positioning description information of the positioning place.
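For illustration only, a crude keyword lookup over the voice recognition text might look like the sketch below; a practical system would use a trained named-entity recognizer, and the vocabulary shown is invented for the example.

    # Hypothetical attribute vocabulary mapping spoken keywords to place attributes.
    ATTRIBUTE_KEYWORDS = {
        "front desk": ("name", "front desk"),
        "toilet": ("function", "toilet"),
        "wait here": ("travel", "destination"),
    }

    def extract_description(asr_text: str) -> dict:
        """Pick attribute keywords out of a voice recognition result,
        e.g. 'this is the front desk' -> {'name': 'front desk'}."""
        text = asr_text.lower()
        description = {}
        for keyword, (attribute, value) in ATTRIBUTE_KEYWORDS.items():
            if keyword in text:
                description[attribute] = value
        return description
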
Therefore, by means of gesture recognition and voice recognition interaction technologies, the positioning position information of a newly added positioning place and the positioning description information of that place can be determined rapidly, so that the two can be conveniently and quickly associated and the new positioning place can be quickly added to the mapping map, which improves the efficiency and reliability of mapping map construction. At the same time, an advanceable smart terminal such as a delivery robot or a vehicle can be rapidly instructed to travel automatically to a specific positioning place in the indoor environment, which improves the flexibility and usability of navigation based on the mapping map and makes the operation more natural and user-friendly.
Optionally, in a possible implementation manner of this embodiment, in 103, the positioning location information and the positioning description information may be specifically subjected to static association processing on an initial mapping map of the indoor environment, so as to construct an updated mapping map of the indoor environment.
Therefore, by combining a gesture instruction and a voice instruction, the user can indicate a positioning place to be newly added to the initial mapping map of the indoor environment, and the positioning description information of that place can then be associated with the corresponding area of the initial mapping map according to the positioning position information of the place, which improves the efficiency and reliability of updating the mapping map.
In one particular implementation, prior to 103, an initial mapping map of the indoor environment may be further acquired.
For example, an initial mapping map of the indoor environment may be constructed directly: the surrounding environment may be detected with a sensing device in the indoor environment to build the initial mapping map.
Alternatively, for another example, an initial mapping map of the indoor environment constructed by another, specific terminal device may be acquired. The specific terminal device may likewise detect the surrounding environment in the indoor environment with a sensing device to construct the initial mapping map. Furthermore, simple specific positioning places such as gates and entrances can be marked manually on the initial mapping map constructed by the specific terminal device, so as to complete the initial mapping map of the indoor environment.
Therefore, the initial mapping map constructed based on the surrounding environment is obtained and can be used as a reference mapping map to correlate the newly-added positioning places, and the efficiency and the reliability of mapping map construction can be effectively improved.
In another specific implementation process, in 103, the positioning location may be marked on the initial mapping map according to the positioning location information to obtain a map element corresponding to the positioning location on the initial mapping map, and then, the positioning description information of the positioning location may be associated with the map element to construct an updated mapping map of the indoor environment.
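A minimal Python sketch of this static association step, with all names invented for illustration rather than taken from the disclosure, could be:

    class MappingMap:
        """Toy stand-in for the mapping map: marking a place creates a map element,
        and associating attaches the voice-derived description to that element."""

        def __init__(self):
            self.elements = []

        def mark_place(self, x: float, y: float) -> dict:
            element = {"x": x, "y": y, "attributes": {}}
            self.elements.append(element)
            return element

        def associate(self, element: dict, description: dict) -> None:
            element["attributes"].update(description)

    # Usage: mark the place indicated by the gesture, then attach the attributes
    # parsed from the voice instruction ("this is the front desk").
    indoor_map = MappingMap()
    front_desk = indoor_map.mark_place(12.5, 3.0)
    indoor_map.associate(front_desk, {"name": "front desk"})
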
Therefore, newly added positioning places are marked on the initial mapping map of the indoor environment as corresponding map elements, and the positioning description information of the positioning places can then be associated with the marked map elements, so that the updated mapping map of the indoor environment is constructed and serves as the navigation basis for an advanceable smart terminal such as a delivery robot or a vehicle.
Optionally, in a possible implementation manner of this embodiment, in 103, the positioning location information and the positioning description information may be specifically subjected to dynamic association processing, so as to generate the current travel instruction in the indoor environment.
Therefore, the user can use the combination of a gesture instruction and a voice instruction to indicate the positioning place that serves as the travel destination, and a travel instruction can then be generated by association processing according to the positioning position information of that place, which improves the efficiency and reliability of the automatic travel of the advanceable smart terminal.
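A corresponding sketch of the dynamic association branch, again with invented names, might simply bundle the located destination and its description into a one-off travel command:

    def make_travel_instruction(place_xy, description: dict) -> dict:
        """Dynamic association: combine the positioning position information of the
        destination with its positioning description information into a single
        travel command for the advanceable smart terminal."""
        return {
            "action": "go_to",
            "target": {"x": place_xy[0], "y": place_xy[1]},
            "description": description,
        }

    # e.g. pointing at a spot and saying "wait here":
    command = make_travel_instruction((12.5, 3.0), {"travel": "destination"})
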
In this embodiment, a positioning instruction that includes a gesture instruction and a voice instruction is obtained from a user in the indoor environment, and positioning position information of the positioning place indicated by the instruction in the indoor environment and positioning description information of that place are then obtained according to the instruction, so that the positioning position information and the positioning description information can be associated for automatic travel in the indoor environment. Because an instruction mode combining a gesture instruction and a voice instruction is adopted, the positioning position information of the place indicated by the gesture instruction and the positioning description information of the place indicated by the voice instruction can be associated automatically. Based on this association processing, a convenient automatic travel scheme in the indoor environment can be provided: for example, update operations such as adding, modifying, or deleting positioning places can be performed on the initial mapping map of the indoor environment to construct an updated mapping map, or an advanceable smart terminal such as a delivery robot or a vehicle can be directly instructed to travel automatically in the indoor environment. No manual participation is needed, the operation is simple, and the accuracy is high, so that the efficiency and reliability of indoor environment positioning are improved.
In addition, by adopting the technical scheme provided by the disclosure, the user experience can be effectively improved.
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required for the disclosure.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure, as shown in fig. 2. The travel processing device 200 of the indoor environment of the present embodiment may include an acquisition unit 201, a positioning unit 202, and an association unit 203. The acquiring unit 201 is configured to acquire a positioning instruction provided by a user in an indoor environment, where the positioning instruction includes a gesture instruction and a voice instruction; a positioning unit 202, configured to obtain, according to the positioning instruction, positioning position information of a positioning location indicated by the positioning instruction in the indoor environment and positioning description information of the positioning location; an associating unit 203, configured to perform association processing on the positioning location information and the positioning description information, so as to perform automatic traveling of the indoor environment.
It should be noted that, part or all of the travel processing apparatus of the indoor environment of the present embodiment may be an application located in a local terminal (i.e., a portable intelligent terminal such as a robot or a vehicle), or may also be a functional unit such as a Software Development Kit (SDK) or the like provided in the application located in the local terminal, or may also be a processing engine located in a server on the network side, or may also be a distributed system located on the network side, for example, a processing engine or a distributed system in a portable intelligent terminal processing platform of the indoor environment on the network side, and the present embodiment is not particularly limited thereto.
It is to be understood that the application may be a native application (native app) installed on the local terminal, or may also be a web page program (webApp) of a browser on the local terminal, which is not limited in this embodiment.
Optionally, in a possible implementation manner of this embodiment, the positioning unit 202 may be specifically configured to obtain, according to the gesture instruction, positioning position information of the positioning place indicated by the gesture instruction in the indoor environment, and to obtain the positioning description information of the positioning place according to the voice instruction.
In a specific implementation process, the positioning unit 202 may be specifically configured to obtain, according to the gesture instruction, the direction information pointed to by the user's finger corresponding to the gesture instruction; obtain the relative distance between the positioning place and the user according to that direction information and the height of the user's finger above the ground; and obtain the positioning position information of the positioning place according to the positioning position information of the user in the indoor environment and the relative distance between the positioning place and the user.
Optionally, in a possible implementation manner of this embodiment, the associating unit 203 may be specifically configured to perform static association processing on the positioning location information and the positioning description information on an initial mapping map of the indoor environment to construct an updated mapping map of the indoor environment.
In a specific implementation, the associating unit 203 may be further configured to obtain an initial mapping map of the indoor environment.
For example, the associating unit 203 may be further configured to detect a surrounding environment in the indoor environment by using a sensing device to construct an initial mapping map of the indoor environment.
Alternatively, for another example, the associating unit 203 may be further configured to obtain an initial mapping map of the indoor environment constructed by a specific terminal device.
In another specific implementation process, the associating unit 203 may be specifically configured to mark the positioning place on the initial mapping map according to the positioning position information, so as to obtain a map element corresponding to the positioning place on the initial mapping map, and to associate the positioning description information of the positioning place with the map element so as to construct an updated mapping map of the indoor environment.
Optionally, in a possible implementation manner of this embodiment, the associating unit 203 may be specifically configured to perform dynamic association processing on the positioning location information and the positioning description information to generate the current travel instruction in the indoor environment.
It should be noted that the method in the embodiment corresponding to fig. 1A may be implemented by the indoor environment traveling processing device provided in this embodiment. For a detailed description, reference may be made to relevant contents in the embodiment corresponding to fig. 1A, and details are not described here.
In this embodiment, the acquiring unit obtains a positioning instruction that includes a gesture instruction and a voice instruction from a user in the indoor environment, and the positioning unit obtains, according to the positioning instruction, positioning position information of the positioning place indicated by the instruction in the indoor environment and positioning description information of that place, so that the associating unit can associate the positioning position information with the positioning description information for automatic travel in the indoor environment. Because an instruction mode combining a gesture instruction and a voice instruction is adopted, the positioning position information of the place indicated by the gesture instruction and the positioning description information of the place indicated by the voice instruction can be associated automatically. Based on this association processing, a convenient automatic travel scheme in the indoor environment can be provided: for example, update operations such as adding, modifying, or deleting positioning places can be performed on the initial mapping map of the indoor environment to construct an updated mapping map, or an advanceable smart terminal such as a delivery robot or a vehicle can be directly instructed to travel automatically in the indoor environment. No manual participation is needed, the operation is simple, and the accuracy is high, so that the efficiency and reliability of indoor environment positioning are improved.
In addition, by adopting the technical scheme provided by the disclosure, the user experience can be effectively improved.
In the technical solution of the present disclosure, the acquisition, storage, and application of the user information involved, for example the user's position information, gesture instructions, and voice instructions, all comply with relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 3 illustrates a schematic block diagram of an example electronic device 300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 3, the electronic device 300 includes a computing unit 301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)302 or a computer program loaded from a storage unit 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 can also be stored. The calculation unit 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
A number of components in the electronic device 300 are connected to the I/O interface 305, including: an input unit 306 such as a keyboard, a mouse, or the like; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the electronic device 300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 301 performs the various methods and processes described above, such as the travel processing method of the indoor environment. For example, in some embodiments, the travel processing method of the indoor environment may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 300 via the ROM 302 and/or the communication unit 309. When the computer program is loaded into the RAM 303 and executed by the computing unit 301, one or more steps of the travel processing method of the indoor environment described above may be performed. Alternatively, in other embodiments, the computing unit 301 may be configured to perform the travel processing method of the indoor environment by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A travel processing method of an indoor environment, comprising:
acquiring a positioning instruction provided by a user in an indoor environment, wherein the positioning instruction comprises a gesture instruction and a voice instruction;
obtaining, according to the positioning instruction, positioning position information of a positioning place indicated by the positioning instruction in the indoor environment and positioning description information of the positioning place;
and associating the positioning position information with the positioning description information, so as to perform automatic travel in the indoor environment.
2. The method of claim 1, wherein the obtaining, according to the positioning instruction, positioning location information of a positioning place indicated by the positioning instruction in the indoor environment and positioning description information of the positioning place comprises:
according to the gesture instruction, obtaining positioning position information of a positioning place indicated by the gesture instruction in the indoor environment;
and obtaining the positioning description information of the positioning place according to the voice instruction.
3. The method of claim 2, wherein the obtaining, according to the gesture instruction, location information of a location indicated by the gesture instruction in the indoor environment comprises:
according to the gesture instruction, obtaining direction information pointed by a user finger corresponding to the gesture instruction;
obtaining the relative distance between the positioning place and the user according to the direction information pointed by the user finger and the height information of the user finger from the ground;
and obtaining the positioning position information of the positioning place according to the positioning position information of the user in the indoor environment and the relative distance between the positioning place and the user.
4. The method of any one of claims 1-3, wherein the associating of the positioning position information with the positioning description information for automatic travel in the indoor environment comprises:
performing static association processing on the positioning position information and the positioning description information on an initial mapping map of the indoor environment to construct an updated mapping map of the indoor environment; or
And performing dynamic association processing on the positioning position information and the positioning description information to generate the current travel instruction in the indoor environment.
5. The method of claim 4, wherein prior to statically associating the positioning location information and the positioning description information on an initial mapping map of the indoor environment to construct an updated mapping map of the indoor environment, further comprising:
detecting a surrounding environment in the indoor environment using a sensing device to construct an initial mapping map of the indoor environment; or
obtaining an initial mapping map of the indoor environment constructed by a specific terminal device.
6. The method of claim 4 or 5, wherein the statically associating the positioning location information and the positioning description information on an initial mapping map of the indoor environment to construct an updated mapping map of the indoor environment comprises:
according to the positioning position information, marking the positioning place on the initial mapping map to obtain a map element corresponding to the positioning place on the initial mapping map;
and associating the positioning description information of the positioning place with the map element to construct an updated mapping map of the indoor environment.
7. A travel processing apparatus for an indoor environment, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a positioning instruction provided by a user in an indoor environment, and the positioning instruction comprises a gesture instruction and a voice instruction;
the positioning unit is used for acquiring positioning position information of a positioning place indicated by the positioning instruction in the indoor environment and positioning description information of the positioning place according to the positioning instruction;
and the association unit is used for associating the positioning position information with the positioning description information, so as to perform automatic travel in the indoor environment.
8. The apparatus according to claim 7, wherein the positioning unit is specifically configured to
According to the gesture instruction, obtaining positioning position information of a positioning place indicated by the gesture instruction in the indoor environment; and
and obtaining the positioning description information of the positioning place according to the voice instruction.
9. The apparatus according to claim 8, wherein the positioning unit is specifically configured to
According to the gesture instruction, obtaining direction information pointed by a user finger corresponding to the gesture instruction;
obtaining the relative distance between the positioning place and the user according to the direction information pointed by the user finger and the height information of the user finger from the ground; and
and obtaining the positioning position information of the positioning place according to the positioning position information of the user in the indoor environment and the relative distance between the positioning place and the user.
10. The apparatus according to any of claims 7-9, wherein the association unit is specifically configured to
Performing static association processing on the positioning position information and the positioning description information on an initial mapping map of the indoor environment to construct an updated mapping map of the indoor environment; or
And performing dynamic association processing on the positioning position information and the positioning description information to generate the current travel instruction in the indoor environment.
11. The apparatus of claim 10, wherein the associating unit is further configured to
Detecting a surrounding environment in the indoor environment using a sensing device to construct an initial mapping map of the indoor environment; or
Obtaining an initial mapping map of the indoor environment constructed by a specific terminal device.
12. The apparatus according to claim 10 or 11, wherein the association unit is specifically configured to
According to the positioning position information, marking the positioning place on the initial mapping map to obtain a map element corresponding to the positioning place on the initial mapping map; and
and associating the positioning description information of the positioning place with the map element to construct an updated mapping map of the indoor environment.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
CN202111284418.6A 2021-11-01 2021-11-01 Indoor environment advancing processing method and device, electronic equipment and readable storage medium Pending CN114281907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111284418.6A CN114281907A (en) 2021-11-01 2021-11-01 Indoor environment advancing processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111284418.6A CN114281907A (en) 2021-11-01 2021-11-01 Indoor environment advancing processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114281907A true CN114281907A (en) 2022-04-05

Family

ID=80868718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111284418.6A Pending CN114281907A (en) 2021-11-01 2021-11-01 Indoor environment advancing processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114281907A (en)

Similar Documents

Publication Publication Date Title
CN114186007A (en) High-precision map generation method and device, electronic equipment and storage medium
CN113436233A (en) Registration method and device of automatic driving vehicle, electronic equipment and vehicle
CN113705390B (en) Positioning method, positioning device, electronic equipment and storage medium
CN113219505B (en) Method, device and equipment for acquiring GPS coordinates for vehicle-road cooperative tunnel scene
CN113932796A (en) High-precision map lane line generation method and device and electronic equipment
CN113177980A (en) Target object speed determination method and device for automatic driving and electronic equipment
US20220307855A1 (en) Display method, display apparatus, device, storage medium, and computer program product
CN114299192B (en) Method, device, equipment and medium for positioning and mapping
CN113899359B (en) Navigation method, device, equipment and storage medium
CN113449687B (en) Method and device for identifying point of interest outlet and point of interest inlet and electronic equipment
CN114266876B (en) Positioning method, visual map generation method and device
CN114281907A (en) Indoor environment advancing processing method and device, electronic equipment and readable storage medium
CN115640372A (en) Method, device, system, equipment and medium for guiding area of indoor plane
CN114518117A (en) Navigation method, navigation device, electronic equipment and medium
CN113360791A (en) Interest point query method and device of electronic map, road side equipment and vehicle
CN113643440A (en) Positioning method, device, equipment and storage medium
CN113190150A (en) Display method, device and storage medium of covering
CN114383600B (en) Processing method and device for map, electronic equipment and storage medium
CN115546348B (en) Robot mapping method and device, robot and storage medium
CN115601561A (en) High-precision map target detection method, device, equipment and medium
CN115346277A (en) Data generation method and device
CN115953354A (en) Method, apparatus and medium for detecting point cloud data deviation in high-precision map
CN114998435A (en) Method and device for determining position of obstacle, electronic equipment and storage medium
CN117611643A (en) Point cloud registration method and device and electronic equipment
CN113343005A (en) Searching method, searching device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination