WO2023124017A1 - Smart device control method, apparatus, server and storage medium - Google Patents

Smart device control method, apparatus, server and storage medium

Info

Publication number
WO2023124017A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
smart device
target
new scene
task
Prior art date
Application number
PCT/CN2022/105812
Other languages
English (en)
French (fr)
Inventor
高斌
陈莹
Original Assignee
达闼机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 达闼机器人股份有限公司
Publication of WO2023124017A1

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present invention relates to the technical field of artificial intelligence, in particular to an intelligent device control method, device, server and storage medium.
  • a cloud server can be used to remotely control a smart device, and the smart device can be a robot, for example. Based on the environmental data collected and reported by the smart device and the task currently to be performed, the cloud server outputs control commands for the smart device and sends the control commands to the smart device, so that the smart device performs actions based on the control commands.
  • Embodiments of the present invention provide a smart device control method, device, server, and storage medium, so as to control smart devices to efficiently execute target tasks.
  • an embodiment of the present invention provides a method for controlling a smart device, the method including:
  • the modeling data includes at least one of images obtained by shooting the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
  • the smart device is controlled to execute a target task in the new scene.
  • the target task is a navigation task
  • the controlling the smart device to perform a target task in the new scene based on the target reference scene includes:
  • based on the modeling data corresponding to the target reference scene, a route for the smart device to perform the navigation task in the new scene is planned.
  • the target task is a scan task, and the scan task refers to a task of creating modeling data corresponding to the new scene;
  • the controlling the smart device to perform a target task in the new scene based on the target reference scene includes:
  • the scanning strategy data includes at least any one of the following: traveling speed, traveling angle, traveling route, and traveling strategy;
  • the travel strategy is used to instruct the smart device to travel on the left, right or middle of the channel.
  • the new scene and the target reference scene are rooms with similar structures and furnishings.
  • before the comparing and matching of the image corresponding to the new scene with the modeling data corresponding to the multiple preset reference scenes, the method further includes:
  • the method also includes:
  • if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, the modeling data corresponding to the new scene provided by the manager of the new scene is obtained, the modeling data corresponding to the new scene being a 3D scene model;
  • the manipulation behavior data generated when background personnel control the virtual twin corresponding to the smart device to execute the target task in the 3D scene model is collected;
  • based on the manipulation behavior data, the smart device is controlled to execute the target task in the new scene.
  • the method also includes:
  • during travel while the smart device performs the target task, a parallel line matching the travel direction corresponding to the target manipulation is detected in the location area where the smart device is currently located; the travel angle corresponding to the target manipulation is determined; and if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, the smart device is controlled to turn back by the target included angle and proceed.
  • the method also includes:
  • an embodiment of the present invention provides a smart device control device, including:
  • a photographing module configured to photograph a new scene currently entered by the smart device, to obtain an image corresponding to the new scene
  • a matching module configured to compare and match the image corresponding to the new scene with the modeling data corresponding to a plurality of preset reference scenes, so as to determine a target reference scene matching the new scene, wherein the modeling data includes at least one of images obtained by shooting the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
  • a control module configured to control the smart device to execute a target task in the new scene based on the target reference scene.
  • the target task is a navigation task
  • the control module is used for:
  • based on the modeling data corresponding to the target reference scene, a route for the smart device to perform the navigation task in the new scene is planned.
  • the target task is a scan task, and the scan task refers to a task of creating modeling data corresponding to the new scene;
  • the control module is used for:
  • the scanning strategy data includes at least any one of the following: traveling speed, traveling angle, traveling route, and traveling strategy;
  • the travel strategy is used to instruct the smart device to travel on the left, right or middle of the channel.
  • the new scene and the target reference scene are rooms with similar structures and furnishings.
  • the shooting module is also used for:
  • control module is also used for:
  • if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, the modeling data corresponding to the new scene provided by the manager of the new scene is obtained, the modeling data corresponding to the new scene being a 3D scene model;
  • the manipulation behavior data generated when background personnel control the virtual twin corresponding to the smart device to execute the target task in the 3D scene model is collected;
  • based on the manipulation behavior data, the smart device is controlled to execute the target task in the new scene.
  • control module is also used for:
  • during travel while the smart device performs the target task, a parallel line matching the travel direction corresponding to the target manipulation is detected in the location area where the smart device is currently located; the travel angle corresponding to the target manipulation is determined; and if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, the smart device is controlled to turn back by the target included angle and proceed.
  • control module is also used for:
  • an embodiment of the present invention provides a server, which includes a processor and a memory, wherein executable code is stored in the memory, and when the executable code is executed by the processor, the processor can at least implement the smart device control method in the first aspect.
  • an embodiment of the present invention provides a non-transitory machine-readable storage medium, which stores executable code; when the executable code is executed by the processor of a server, the processor can at least implement the smart device control method in the first aspect.
  • with this solution, when the smart device newly enters a new scene, since there is no modeling data corresponding to the new scene in the cloud server yet, a corresponding image can be taken of the new scene and then compared and matched against the data corresponding to a plurality of preset reference scenes, so as to find a target reference scene similar to the new scene. The target task can then be executed in the new scene with reference to the target reference scene. In this manner, the smart device can efficiently perform the target task when entering a new scene.
  • FIG. 1 is a schematic flowchart of a smart device control method provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a scenario for manipulating a smart device provided by an embodiment of the present invention
  • FIG. 3 is a schematic diagram of another scenario for manipulating a smart device provided by an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a smart device control device provided by an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a server provided by an embodiment of the present invention.
  • depending on the context, the words "if" and "in case" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting".
  • similarly, depending on the context, the phrases "if it is determined" or "if (the stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
  • An embodiment of the present invention provides a smart device control method, which can be applied to a cloud server.
  • the smart device control method provided by the embodiment of the present invention may include the following steps:
  • the modeling data includes at least one of images obtained by photographing the reference scene at different angles or at different positions, and a 3D scene model corresponding to the reference scene.
  • the aforementioned scenes may be various scenes such as hospitals, hotels, schools, nursing homes, office buildings, factories, and outdoors.
  • the smart device mentioned above may be a device such as a robot.
  • Smart devices can perform different tasks in different scenarios, or provide different services, such as sweeping, cleaning, security, delivery and other services.
  • the cloud server needs to understand the scene to remotely control smart devices to reach designated locations through navigation and obstacle avoidance to provide services. This requires the creation of modeling data corresponding to the scene in the cloud server, through which the travel route of the smart device when performing tasks can be planned.
  • the image capture device installed on the smart device can be turned on, and the image capture device can collect images corresponding to the scene that the smart device is facing in the new scene. For example, when the smart device enters a room, there is a table facing the smart device, and the smart device can take an image of the table.
  • the image corresponding to the new scene can be compared and matched with the modeling data corresponding to multiple preset reference scenes, so as to determine a target reference scene matching the new scene.
  • the modeling data includes data such as images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene.
  • the target reference scene matching the new scene may be a scene with a very high similarity to the new scene.
  • the new scene and the target reference scene may be rooms with similar structures and furnishings.
  • the new scene and the target reference scene can be different wards in the same hospital, different guest rooms in the same hotel, different classrooms in the same school, different workshops in the same factory, etc.
  • the new scene and the target reference scene may also be different floors in the same building.
  • the new scene and the target reference scene are characterized by a very high similarity between them.
  • the multiple preset reference scenes in the embodiment of the present invention may be scenes that the smart device or other smart devices have visited before.
  • when the smart device arrives at any reference scene, it scans the reference scene, and the modeling data corresponding to the reference scene can be obtained through the scan.
  • the similarity between the image corresponding to the new scene and the modeling data corresponding to each preset reference scene can be calculated, and the reference scene whose similarity is higher than the preset threshold is determined as the target reference scene matching the new scene.
  • for example, assume the smart device has just scanned Room 101 of Hotel A, so images of each location and corner of Room 101 have been acquired, say 10 images.
  • the smart device now comes to Room 102 of Hotel A to scan, and can take an image P of Room 102 after entering. The similarity between image P and each of the 10 images corresponding to Room 101 is then calculated; assuming one of the 10 images, P', was taken at an angle and position in its room close to those of image P, the similarity between P' and P is higher than the preset threshold. It is therefore determined that Room 102 matches Room 101.
  • the smart device can be controlled to perform the target task in the new scene based on the target reference scene.
  • the aforementioned target tasks may include navigation tasks or image scanning tasks.
  • the scanning task refers to the task of creating the modeling data corresponding to the new scene.
  • the target task is a navigation task.
  • the process of controlling the smart device to perform the target task in the new scene can be implemented as: planning, based on the modeling data corresponding to the target reference scene, the route along which the smart device travels when performing the navigation task in the new scene.
  • since the structure of the different rooms in Hotel A and the locations of the passage, the bed, the bedside table, the TV cabinet and so on are similar or the same, and the smart device has previously scanned Room 101, the modeling data corresponding to Room 101 can be obtained. It is therefore also feasible to use the modeling data of Room 101 in Room 102, so the route for the smart device to perform the navigation task in Room 102 can be planned based on the modeling data of Room 101.
  • for example, if the smart device goes to the TV cabinet in Room 101 to fetch a cup, it needs to move forward 1 meter, then turn right and travel 0.5 meters to reach the front of the TV cabinet. To reach the front of the TV cabinet in Room 102, the smart device likewise needs to move forward 1 meter, then turn right and travel 0.5 meters.
  • the target task is a scanning task.
  • the process of controlling the smart device to perform the target task in the new scene can be implemented as follows: according to the preset correspondence between reference scenes and scanning strategy data, the target scanning strategy data corresponding to the target reference scene is determined; the smart device is controlled to execute the scanning task based on the target scanning strategy data.
  • the image scanning strategy data may at least include any of the following items: traveling speed, traveling angle, traveling route, and traveling strategy.
  • traveling strategy is used to instruct the smart device to travel on the left, right or middle of the channel.
  • the smart device has previously scanned in the reference scenes, so there is corresponding scanning strategy data used when performing the scanning task; this data can be recorded and stored while the smart device performs the scanning task in each reference scene.
  • the scanning policy data describes how the smart device executes the scanning task orderly in the reference scene.
  • since the similarity between the new scene and the target reference scene is very high, the way the smart device performed the scanning task in the target reference scene can be repeated in the new scene. For example, if the smart device performed the scanning task in Room 101 of Hotel A along traveling route X, then the smart device can still perform the scanning task along traveling route X in Room 102 of Hotel A.
  • if the smart device has not previously performed a scanning task in a reference scene similar to the new scene, images of the reference scene taken at different angles or at different positions with a terminal held by a photographer in the reference scene can also be obtained, and the route walked by the photographer while capturing the images can be recorded as the travel route in the scanning strategy data corresponding to the reference scene.
  • the terminal may be a mobile phone, a tablet computer, and the like.
  • the photographer can enter the target reference scene similar to the new scene and shoot the reference scene from different angles or at different positions, or the photographer can directly enter the new scene and shoot the new scene from different angles or at different positions.
  • if the photographer shoots in the target reference scene, the modeling data corresponding to the target reference scene can be obtained. After the modeling data corresponding to the target reference scene is obtained, it can be compared and matched with the new scene.
  • the route taken by the photographer can also be recorded, so that the smart device can also refer to the same route during the scanning process. It should be noted that, since the walking speed of the person is quite different from that of the device, parameters such as the walking speed of the photographer may not be used as a reference.
  • since images obtained by the smart device's own scan fit the smart device's algorithms better than images of the new scene taken manually, the smart device needs to perform the scanning task itself to obtain the images of the new scene rather than using the images taken by the photographer.
  • if the image corresponding to the new scene matches none of the modeling data corresponding to the preset reference scenes, the modeling data corresponding to the new scene provided by the manager of the new scene is obtained, this modeling data being a 3D scene model; the manipulation behavior data generated when the background personnel control the virtual twin corresponding to the smart device to perform the target task in the 3D scene model is collected; and based on the manipulation behavior data, the smart device is controlled to perform the target task in the new scene.
  • the corresponding three-dimensional scene model of the hotel may have been displayed on its official website, so that guests can have a clearer understanding of the hotel and the styles of the rooms that can be selected.
  • the background personnel manipulate the virtual twins to perform the target tasks in the 3D scene model, and record the manipulation behavior data generated during the manipulation.
  • the smart device can be controlled to reproduce the process of executing the target task based on the manipulation behavior data in the real new scene, so as to perform the target task in the real new scene.
  • the three-dimensional scene model can also be a digital twin world.
  • the virtual twin may collide with obstacles in the 3D scene model, and erroneous manipulations can be corrected in time when a collision occurs. Since the collisions occur neither in the real new scene nor on the real smart device, there is no wear on the components of the smart device.
  • through the virtual twin's simulation of the process of executing the target task in the 3D scene model, collisions between the smart device and actual obstacles in the real new scene can be effectively avoided, which ultimately reduces wear on the smart device and lowers the cost of performing the target task.
  • the method provided by the embodiment of the present invention may further include: while the smart device is traveling during execution of the target task, detecting a parallel line in the location area where the smart device is currently located that matches the direction of travel corresponding to the target manipulation; determining the travel angle corresponding to the target manipulation; and, if there is a target angle between the travel angle corresponding to the target manipulation and the parallel line, controlling the smart device to turn back by the target angle and proceed.
  • the parallel lines on both sides shown in the left figure of Figure 2 represent the wall, and the middle of the wall is the aisle.
  • the smart device is traveling at an angle that intersects the walls on both sides; if this is not corrected in time, the smart device will collide with the left wall after a while. Assuming the target angle between the travel angle and the left wall is 15°, the travel angle of the smart device can be automatically adjusted back by 15° clockwise, so that, as shown in the right part of Figure 2, the smart device again travels in a direction parallel to the walls on both sides.
  • the method provided by the embodiment of the present invention may further include: while the smart device is traveling during execution of the target task, upon receiving an instruction input by the background personnel to control the smart device to turn, detecting whether there is an intersection in the smart device's direction of travel; if there is an intersection in the traveling direction, planning, based on the modeling data corresponding to the intersection, a traveling route that passes through the intersection and proceeds in the direction corresponding to the turning instruction.
  • the background personnel can view the process of the smart device executing the target task through a monitoring screen. It is understandable that, since the smart device only performs the target task with reference to a target reference scene with a very high degree of similarity, if the positions of some obstacles in the target reference scene differ from those in the new scene, the smart device cannot fully follow the target reference scene when performing the target task and needs to be adjusted in time. At this point the background personnel can intervene and send corresponding control instructions to the smart device, such as an instruction to make the smart device turn left or right.
  • a smart device may be provided with various sensors, such as a depth camera, a laser radar, an ultrasonic sensor, a gyroscope, and the like.
  • the above sensors can be used to detect whether there really is an intersection in front of the smart device. If there is an intersection, the modeling data corresponding to the intersection is further determined based on the data collected by the above sensors, and based on that modeling data a target travel route that passes through the intersection and proceeds to the right is planned.
  • the smart device will automatically plan a smooth target travel route, and since there is no need to adjust the travel angle repeatedly or to reverse, the target travel route is also the most efficient route passing through the intersection to the right.
  • with this solution, when the smart device newly enters a new scene, since there is no modeling data corresponding to the new scene in the cloud server yet, a corresponding image can be taken of the new scene and then compared and matched against the data corresponding to a plurality of preset reference scenes, so as to find a target reference scene similar to the new scene. The target task can then be executed in the new scene with reference to the target reference scene. In this manner, the smart device can efficiently perform the target task when entering a new scene.
  • the smart device control device of one or more embodiments of the present invention will be described in detail below. Those skilled in the art can understand that these smart device control devices can be configured by using commercially available hardware components through the steps taught in this solution.
  • Fig. 4 is a schematic structural diagram of a smart device control device provided by an embodiment of the present invention. As shown in Fig. 4, the device includes:
  • a photographing module 41 configured to photograph a new scene currently entered by the smart device, to obtain an image corresponding to the new scene
  • a matching module 42 configured to compare and match the image corresponding to the new scene with the modeling data corresponding to a plurality of preset reference scenes, so as to determine a target reference scene that matches the new scene, wherein the modeling data includes at least one of images obtained by shooting the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
  • the control module 43 is configured to control the smart device to execute the target task in the new scene based on the target reference scene.
  • the target task is a navigation task
  • the control module 43 is used for:
  • based on the modeling data corresponding to the target reference scene, a route for the smart device to perform the navigation task in the new scene is planned.
  • the target task is a scan task, and the scan task refers to a task of creating modeling data corresponding to the new scene;
  • the control module 43 is used for:
  • the scanning strategy data includes at least any one of the following: traveling speed, traveling angle, traveling route, and traveling strategy;
  • the travel strategy is used to instruct the smart device to travel on the left, right or middle of the channel.
  • the new scene and the target reference scene are rooms with similar structures and furnishings.
  • the photographing module 41 is also used for:
  • control module 43 is also used for:
  • if the image corresponding to the new scene does not match the modeling data corresponding to any of the plurality of preset reference scenes, the modeling data corresponding to the new scene provided by the manager of the new scene is obtained, the modeling data corresponding to the new scene being a 3D scene model;
  • the manipulation behavior data generated when background personnel control the virtual twin corresponding to the smart device to execute the target task in the 3D scene model is collected;
  • based on the manipulation behavior data, the smart device is controlled to execute the target task in the new scene.
  • control module 43 is also used for:
  • during travel while the smart device performs the target task, a parallel line matching the travel direction corresponding to the target manipulation is detected in the location area where the smart device is currently located; the travel angle corresponding to the target manipulation is determined; and if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, the smart device is controlled to turn back by the target included angle and proceed.
  • control module 43 is also used for:
  • the apparatus shown in FIG. 4 can execute the smart device control method provided in the embodiments shown in FIGS. 1 to 3; for the detailed execution process and technical effects, refer to the descriptions in the foregoing embodiments.
  • the structure of the smart device control device shown in FIG. 4 above may be implemented as a server.
  • the server may include: a processor 91 and a memory 92 .
  • executable code is stored on the memory 92, and when the executable code is executed by the processor 91, the processor 91 can at least implement the smart device control method provided in the embodiments shown in FIG. 1 to FIG. 3.
  • the server may also include a communication interface 93 for communicating with other devices.
  • an embodiment of the present invention provides a non-transitory machine-readable storage medium, where executable code is stored on the non-transitory machine-readable storage medium, and when the executable code is executed by a processor of the server, The processor can at least implement the smart device control method provided in the embodiment shown in FIG. 1 to FIG. 3 .
  • the smart device control method provided by the embodiments of the present invention can be executed by a program/software, which can be provided by the network side; the server mentioned in the foregoing embodiments can download the program/software to a local non-volatile storage medium, and, when it needs to execute the aforementioned smart device control method, the CPU reads the program/software into memory and then executes it to implement the smart device control method provided in the foregoing embodiments. For the execution process, refer to the illustrations in FIG. 1 to FIG. 3 above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A smart device control method and apparatus, a server, and a storage medium. The smart device control method includes: photographing a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene (S101); comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data includes at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene (S102); and, based on the target reference scene, controlling the smart device to execute a target task in the new scene (S103). When the smart device newly enters a new scene, a fairly similar target reference scene can be found for the new scene, and the target task can then be executed in the new scene with reference to the target reference scene, so that the smart device can efficiently execute the target task when newly entering a new scene.

Description

Smart device control method, apparatus, server and storage medium
Cross-Reference
This application claims priority to the Chinese patent application with application number "2021116439292", entitled "Smart device control method, apparatus, server and storage medium", filed on December 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of artificial intelligence, and in particular to a smart device control method, apparatus, server and storage medium.
Background
In the related art, a smart device can be remotely controlled through a cloud server; the smart device may be, for example, a robot. Based on the environmental data collected and reported by the smart device and the task currently to be performed, the cloud server outputs control instructions for the smart device and sends the control instructions to the smart device, so that the smart device performs actions based on the control instructions.
If the smart device newly enters a new scene for which no corresponding modeling data yet exists in the cloud server, how the smart device can efficiently execute a target task is a problem that needs to be solved urgently.
Summary
Embodiments of the present invention provide a smart device control method, apparatus, server and storage medium, so as to control a smart device to efficiently execute a target task.
In a first aspect, an embodiment of the present invention provides a smart device control method, the method including:
photographing a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene;
comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data includes at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
based on the target reference scene, controlling the smart device to execute a target task in the new scene.
Optionally, the target task is a navigation task;
the controlling, based on the target reference scene, the smart device to execute a target task in the new scene includes:
based on the modeling data corresponding to the target reference scene, planning a travel route along which the smart device performs the navigation task in the new scene.
Optionally, the target task is a scanning task, the scanning task referring to a task of creating the modeling data corresponding to the new scene;
the controlling, based on the target reference scene, the smart device to execute a target task in the new scene includes:
determining, according to a preset correspondence between reference scenes and scanning strategy data, target scanning strategy data corresponding to the target reference scene;
controlling the smart device to execute the scanning task based on the target scanning strategy data.
Optionally, the scanning strategy data includes at least any one of the following:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy is used to instruct the smart device to travel on the left side, the right side or the middle of an aisle.
Optionally, the new scene and the target reference scene are rooms with similar structures and furnishings.
Optionally, before the comparing and matching of the image corresponding to the new scene with the modeling data corresponding to the plurality of preset reference scenes, the method further includes:
obtaining images of a reference scene captured by a photographer in the reference scene with a held terminal at different angles or at different positions, and recording the route walked by the photographer while capturing the images, as the travel route in the scanning strategy data corresponding to the reference scene.
Optionally, the method further includes:
if the image corresponding to the new scene matches none of the modeling data corresponding to the plurality of preset reference scenes, obtaining modeling data corresponding to the new scene provided by a manager of the new scene, the modeling data corresponding to the new scene being a three-dimensional scene model;
collecting manipulation behavior data generated when background personnel control a virtual twin corresponding to the smart device to execute the target task in the three-dimensional scene model;
based on the manipulation behavior data, controlling the smart device to execute the target task in the new scene.
Optionally, the method further includes:
while the smart device is traveling during execution of the target task, detecting, in the location area where the smart device is currently located, a parallel line matching the travel direction corresponding to a target manipulation;
determining the travel angle corresponding to the target manipulation;
if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, controlling the smart device to turn back by the target included angle and proceed.
Optionally, the method further includes:
while the smart device is traveling during execution of the target task, upon receiving an instruction input by background personnel to control the smart device to turn, detecting whether there is an intersection in the travel direction of the smart device;
if there is an intersection in the travel direction, planning, based on modeling data corresponding to the intersection, a travel route that passes through the intersection and proceeds in the turning direction corresponding to the turning instruction.
In a second aspect, an embodiment of the present invention provides a smart device control apparatus, including:
a photographing module, configured to photograph a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene;
a matching module, configured to compare and match the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data includes at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
a control module, configured to control, based on the target reference scene, the smart device to execute a target task in the new scene.
Optionally, the target task is a navigation task;
the control module is configured to:
plan, based on the modeling data corresponding to the target reference scene, a travel route along which the smart device performs the navigation task in the new scene.
Optionally, the target task is a scanning task, the scanning task referring to a task of creating the modeling data corresponding to the new scene;
the control module is configured to:
determine, according to a preset correspondence between reference scenes and scanning strategy data, target scanning strategy data corresponding to the target reference scene;
control the smart device to execute the scanning task based on the target scanning strategy data.
Optionally, the scanning strategy data includes at least any one of the following:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy is used to instruct the smart device to travel on the left side, the right side or the middle of an aisle.
Optionally, the new scene and the target reference scene are rooms with similar structures and furnishings.
Optionally, the photographing module is further configured to:
obtain images of a reference scene captured by a photographer in the reference scene with a held terminal at different angles or at different positions, and record the route walked by the photographer while capturing the images, as the travel route in the scanning strategy data corresponding to the reference scene.
Optionally, the control module is further configured to:
if the image corresponding to the new scene matches none of the modeling data corresponding to the plurality of preset reference scenes, obtain modeling data corresponding to the new scene provided by a manager of the new scene, the modeling data corresponding to the new scene being a three-dimensional scene model;
collect manipulation behavior data generated when background personnel control a virtual twin corresponding to the smart device to execute the target task in the three-dimensional scene model;
based on the manipulation behavior data, control the smart device to execute the target task in the new scene.
Optionally, the control module is further configured to:
while the smart device is traveling during execution of the target task, detect, in the location area where the smart device is currently located, a parallel line matching the travel direction corresponding to a target manipulation;
determine the travel angle corresponding to the target manipulation;
if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, control the smart device to turn back by the target included angle and proceed.
Optionally, the control module is further configured to:
while the smart device is traveling during execution of the target task, upon receiving an instruction input by background personnel to control the smart device to turn, detect whether there is an intersection in the travel direction of the smart device;
if there is an intersection in the travel direction, plan, based on modeling data corresponding to the intersection, a travel route that passes through the intersection and proceeds in the turning direction corresponding to the turning instruction.
In a third aspect, an embodiment of the present invention provides a server, including a processor and a memory, wherein executable code is stored on the memory, and when the executable code is executed by the processor, the processor is caused to at least implement the smart device control method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory machine-readable storage medium on which executable code is stored; when the executable code is executed by a processor of a server, the processor is caused to at least implement the smart device control method in the first aspect.
With the present invention, when a smart device newly enters a new scene for which no corresponding modeling data yet exists in the cloud server, a corresponding image can be taken of the new scene and then compared and matched against the data corresponding to a plurality of preset reference scenes, so as to find a target reference scene fairly similar to the new scene. The target task can then be executed in the new scene with reference to the target reference scene. In this way, the smart device can efficiently execute the target task when newly entering a new scene.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a smart device control method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a scenario of manipulating a smart device provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of another scenario of manipulating a smart device provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a smart device control apparatus provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a server provided by an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms used in the embodiments of the present invention are for the purpose of describing particular embodiments only and are not intended to limit the present invention. The singular forms "a", "said" and "the" used in the embodiments of the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise; "a plurality of" generally means at least two.
Depending on the context, the words "if" and "in case" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)".
In addition, the sequence of steps in the following method embodiments is merely an example and not a strict limitation.
An embodiment of the present invention provides a smart device control method, which can be applied to a cloud server. As shown in FIG. 1, the smart device control method provided by the embodiment of the present invention may include the following steps:
101. Photograph a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene.
102. Compare and match the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data includes at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene.
103. Based on the target reference scene, control the smart device to execute a target task in the new scene.
The above scenes may be various scenes such as hospitals, hotels, schools, nursing homes, office buildings, factories and outdoor environments.
The above smart device may be a device such as a robot. A smart device can perform different tasks, or provide different services, in different scenes, such as floor sweeping, cleaning, security, delivery and other services. Before providing a service, the cloud server needs to understand the scene so as to remotely control the smart device to reach a designated location through navigation and obstacle avoidance. This requires creating, in the cloud server, the modeling data corresponding to the scene, from which the travel route of the smart device when performing tasks can be planned.
If the smart device newly enters a new scene for which no corresponding modeling data yet exists in the cloud server, how the smart device is to execute the target task is the problem to be solved by the embodiments of the present invention.
When the smart device newly enters a new scene, an image capture apparatus provided on the smart device can be turned on, through which an image of the view the smart device is facing in the new scene can be collected. For example, when the smart device enters a room and a table is directly in front of it, the smart device can capture an image of the table.
After the image corresponding to the new scene is captured, the image corresponding to the new scene can be compared and matched with the modeling data corresponding to the plurality of preset reference scenes, to determine a target reference scene that matches the new scene.
The modeling data includes data such as images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene.
It should be noted that the target reference scene matching the new scene may be a scene with a very high similarity to the new scene. Optionally, the new scene and the target reference scene may be rooms with similar structures and furnishings.
For example, the new scene and the target reference scene may be different wards in the same hospital, different guest rooms in the same hotel, different classrooms in the same school, different workshops in the same factory, and so on. Alternatively, the new scene and the target reference scene may be different floors in the same building.
The new scene and the target reference scene are characterized by a very high similarity between them.
The plurality of preset reference scenes in the embodiments of the present invention may be scenes that this smart device or other smart devices have previously visited. When a smart device arrives at any reference scene, it scans the reference scene, and the modeling data corresponding to the reference scene can be obtained through the scan.
On this basis, the similarity between the image corresponding to the new scene and the modeling data corresponding to each preset reference scene can be calculated, and a reference scene whose similarity is higher than a preset threshold is determined as the target reference scene matching the new scene.
For example, suppose the smart device has just scanned Room 101 of Hotel A, so that images of each location and corner of Room 101 have been acquired, say 10 images. The smart device now comes to Room 102 of Hotel A to scan, and after entering Room 102 it can capture an image P of Room 102. The similarity between image P and each of the 10 images corresponding to Room 101 is then calculated. Suppose that one of the 10 images, P', was taken at an angle and position in its room close to those of image P, so that the similarity between P' and P is higher than the preset threshold. It is then determined that Room 102 matches Room 101.
Of course, in addition to calculating the similarity between image P and the images corresponding to Room 101, if the smart device has also visited other scenes, the similarity between image P and the images corresponding to those scenes needs to be calculated as well. In theory, however, since the different rooms of Hotel A are likely to be similar, Room 101 can be determined as the match for Room 102, whereas the other scenes are not similar to Room 102, so the similarity between the images is low and Room 102 does not match any other scene.
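To make the matching of step 102 concrete, a minimal sketch in Python follows. It is illustrative only and not part of the claimed solution: the embodiment does not prescribe a similarity measure, so the toy image_similarity below (cosine similarity of grayscale histograms) and the ReferenceScene layout are assumptions; a practical system would more likely compare learned feature embeddings.

```python
# Illustrative sketch of step 102: match a new-scene image against the
# modeling data (here, stored images) of preset reference scenes.
import numpy as np
from dataclasses import dataclass, field


@dataclass
class ReferenceScene:
    name: str                                    # e.g. "Hotel A, Room 101"
    images: list = field(default_factory=list)   # images shot at different angles/positions


def image_similarity(img_a, img_b):
    # Toy stand-in: cosine similarity of grayscale histograms. A real system
    # would use feature embeddings; this only illustrates the interface.
    ha, _ = np.histogram(img_a, bins=64, range=(0, 255), density=True)
    hb, _ = np.histogram(img_b, bins=64, range=(0, 255), density=True)
    denom = np.linalg.norm(ha) * np.linalg.norm(hb)
    return float(np.dot(ha, hb) / denom) if denom else 0.0


def match_reference_scene(new_image, reference_scenes, threshold=0.85):
    """Return the reference scene whose stored images best match the
    new-scene image, or None if no similarity exceeds the threshold."""
    best_scene, best_score = None, threshold
    for scene in reference_scenes:
        for ref_image in scene.images:
            score = image_similarity(new_image, ref_image)
            if score > best_score:
                best_scene, best_score = scene, score
    return best_scene
```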
After the target reference scene is determined in the manner described above, the smart device can be controlled, based on the target reference scene, to execute the target task in the new scene.
The above target task may include a navigation task or a scanning task, where the scanning task refers to the task of creating the modeling data corresponding to the new scene.
Optionally, when the target task is a navigation task, the process of controlling, based on the target reference scene, the smart device to execute the target task in the new scene may be implemented as: planning, based on the modeling data corresponding to the target reference scene, the travel route along which the smart device performs the navigation task in the new scene.
Continuing the above example: since the structure of the different rooms of Hotel A and the locations of the aisle, the bed, the bedside table, the TV cabinet and so on are similar or identical, and the smart device has previously scanned Room 101, the modeling data corresponding to Room 101 can be obtained. It is therefore also feasible to use the modeling data of Room 101 in Room 102, so the travel route along which the smart device performs the navigation task in Room 102 can be planned based on the modeling data of Room 101.
For example, for the smart device to fetch a cup from the TV cabinet in Room 101, it needs to move forward 1 meter, then turn right and travel 0.5 meters to arrive in front of the TV cabinet. To arrive in front of the TV cabinet in Room 102, the smart device likewise needs to move forward 1 meter, then turn right and travel 0.5 meters.
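The route reuse just described can be pictured with a small sketch; the route store, the motion-primitive encoding and the names in it are assumptions for illustration rather than part of the embodiment.

```python
# Illustrative sketch of the navigation branch of step 103: reuse the route
# recorded in the matched reference scene as the plan for the new scene.
REFERENCE_ROUTES = {
    # Recorded in Room 101; reusable in the structurally similar Room 102.
    ("Hotel A, Room 101", "TV cabinet"): [
        ("forward", 1.0),      # move forward 1 m
        ("turn_right", 90.0),  # turn right 90 degrees
        ("forward", 0.5),      # move forward 0.5 m
    ],
}


def plan_route(target_reference_scene, destination):
    """Look up the travel route recorded in the target reference scene and
    reuse it as the planned route for the navigation task in the new scene."""
    return REFERENCE_ROUTES[(target_reference_scene, destination)]


# Example: the same primitives reach the TV cabinet in Room 102.
route_in_room_102 = plan_route("Hotel A, Room 101", "TV cabinet")
```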
Optionally, when the target task is a scanning task, the process of controlling, based on the target reference scene, the smart device to execute the target task in the new scene may be implemented as: determining, according to the preset correspondence between reference scenes and scanning strategy data, the target scanning strategy data corresponding to the target reference scene; and controlling the smart device to execute the scanning task based on the target scanning strategy data.
Optionally, the scanning strategy data may include at least any one of the following: travel speed, travel angle, travel route, travel strategy, where the travel strategy is used to instruct the smart device to travel on the left side, the right side or the middle of an aisle.
It can be understood that the smart device has previously scanned in the reference scenes, so there is corresponding scanning strategy data used when performing the scanning task; the corresponding scanning strategy data can be recorded and stored while the smart device performs the scanning task in each reference scene. The scanning strategy data describes how the smart device performs the scanning task in an orderly manner in the reference scene.
Since the similarity between the new scene and the target reference scene is very high, the way the smart device performed the scanning task in the target reference scene can be repeated in the new scene. For example, if the smart device performed the scanning task in Room 101 of Hotel A along travel route X, then the smart device can still perform the scanning task along travel route X in Room 102 of Hotel A.
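One possible shape for the correspondence between reference scenes and scanning strategy data is sketched below, assuming exactly the four fields the embodiment lists; the field names and example values are illustrative.

```python
# Illustrative sketch of scanning strategy data and its per-scene lookup.
from dataclasses import dataclass


@dataclass
class ScanStrategy:
    speed_m_s: float   # travel speed
    angle_deg: float   # travel angle
    route: list        # travel route, e.g. an ordered list of waypoints
    lane: str          # travel strategy: "left", "right" or "middle" of the aisle


SCAN_STRATEGIES = {
    "Hotel A, Room 101": ScanStrategy(
        speed_m_s=0.4,
        angle_deg=0.0,
        route=["door", "bed", "bedside table", "TV cabinet"],
        lane="middle",
    ),
}


def scan_strategy_for(target_reference_scene):
    """Fetch the target scanning strategy data recorded for the matched
    reference scene, to be replayed in the similar new scene."""
    return SCAN_STRATEGIES[target_reference_scene]
```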
Optionally, if the smart device has not previously performed a scanning task in a reference scene similar to the new scene, images of the reference scene captured by a photographer in the reference scene with a held terminal at different angles or at different positions can also be obtained, and the route walked by the photographer while capturing the images can be recorded as the travel route in the scanning strategy data corresponding to the reference scene.
The terminal may be a mobile phone, a tablet computer, or the like.
It can be understood that the photographer may enter a target reference scene similar to the new scene and photograph the reference scene at different angles or at different positions, or the photographer may directly enter the new scene and photograph the new scene at different angles or at different positions.
In the case of entering the target reference scene to shoot, the modeling data corresponding to the target reference scene can be obtained. Once the modeling data corresponding to the target reference scene is obtained, it can be compared and matched with the new scene.
While the photographer photographs the target reference scene, the route walked by the photographer can also be recorded, so that the smart device can follow the same route during scanning. It should be noted that, since a person's walking speed differs considerably from a device's travel speed, parameters such as the photographer's walking speed need not be used as a reference.
In the case of entering the new scene to shoot, the matching step can be skipped; the route walked by the photographer in the new scene can be recorded directly, and the smart device can likewise follow the same route during scanning.
It should be noted that, in the above cases, compared with manually captured images of the new scene, the images obtained by the smart device's own scan fit better and are more convenient for the algorithms of the smart device to use, so the smart device needs to perform the scanning task itself to obtain the images of the new scene rather than using the images taken by the photographer.
Optionally, if the image corresponding to the new scene matches none of the modeling data corresponding to the plurality of preset reference scenes, modeling data corresponding to the new scene provided by a manager of the new scene is obtained, the modeling data corresponding to the new scene being a three-dimensional scene model; manipulation behavior data generated when background personnel control the virtual twin corresponding to the smart device to execute the target task in the three-dimensional scene model is collected; and, based on the manipulation behavior data, the smart device is controlled to execute the target task in the new scene.
It is worth noting that some new scenes come with architectural drawings, pre-built three-dimensional scene models, and the like. For example, for some hotels, a corresponding three-dimensional scene model may already be displayed on the official website, so that guests can get a clearer picture of the hotel and the styles of the rooms available. In such cases, the already-built three-dimensional scene model corresponding to the new scene can be obtained directly, and the smart device can be mapped into that three-dimensional scene model, that is, the virtual twin corresponding to the smart device is added to the three-dimensional scene model. Background personnel then manipulate the virtual twin in the three-dimensional scene model to execute the target task, and the manipulation behavior data generated during the manipulation is recorded. Finally, the smart device can be controlled to reproduce, in the real new scene and based on the manipulation behavior data, the process of executing the target task, so as to execute the target task in the real new scene. The three-dimensional scene model may also be a digital twin world.
It should be noted that, while the background personnel manipulate the virtual twin, the virtual twin may collide with obstacles in the three-dimensional scene model, and erroneous manipulations can be corrected in time when a collision occurs. Since the collisions occur neither in the real new scene nor on the real smart device, they cause no wear on the smart device's components. Through the virtual twin's simulation of the process of executing the target task in the three-dimensional scene model, collisions between the smart device and actually existing obstacles in the real new scene can be effectively avoided, which ultimately reduces wear on the smart device and lowers the cost of executing the target task.
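The record-and-replay of manipulation behavior data can be sketched as follows; the event encoding and the device.execute interface are assumptions made for the sake of illustration.

```python
# Illustrative sketch of the digital-twin fallback: record how the operator
# drives the virtual twin in the 3D scene model, then replay on the device.
import time


class ManipulationRecorder:
    def __init__(self):
        self.events = []  # (timestamp, command, argument) tuples

    def record(self, command, argument):
        # Called while background personnel drive the virtual twin; collisions
        # happen only in simulation, so mistakes cost no hardware wear.
        self.events.append((time.time(), command, argument))


def replay_on_device(events, device):
    """Reproduce the recorded manipulation behavior data on the real smart
    device in the real new scene (assumes a device.execute(cmd, arg) API)."""
    for _, command, argument in events:
        device.execute(command, argument)
```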
While the smart device executes the target task based on the target reference scene, in order to further improve the safety of task execution and prevent the smart device from colliding, optionally, the method provided by the embodiment of the present invention may further include: while the smart device is traveling during execution of the target task, detecting, in the location area where the smart device is currently located, a parallel line matching the travel direction corresponding to the target manipulation; determining the travel angle corresponding to the target manipulation; and, if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, controlling the smart device to turn back by the target included angle and proceed.
As shown in FIG. 2, the parallel lines on both sides in the left part of FIG. 2 represent walls, with an aisle between them. The smart device is traveling at an angle that intersects the walls on both sides; if this is not corrected in time, the smart device will collide with the left wall after a while. Assuming the target included angle between the travel angle and the left wall is 15°, the travel angle of the smart device can be automatically adjusted back by 15° clockwise, so that, as shown in the right part of FIG. 2, the smart device again travels in a direction parallel to the walls on both sides.
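The correction in FIG. 2 amounts to measuring the included angle between the travel direction and the detected wall line and rotating back by that amount. A minimal sketch follows, with the angle convention chosen here as an assumption:

```python
# Illustrative sketch of the heading correction in FIG. 2: if the travel
# direction makes a target included angle with the detected wall parallel
# line, rotate back by that angle so the device travels parallel to the wall.

def correct_heading(travel_angle_deg, wall_line_angle_deg, tolerance_deg=1.0):
    """Return the signed rotation (degrees) that realigns the device with the
    wall parallel line; 0.0 if already parallel within tolerance."""
    # Included angle normalized to (-90, 90] so lines (not rays) are compared.
    included = (travel_angle_deg - wall_line_angle_deg + 90.0) % 180.0 - 90.0
    if abs(included) <= tolerance_deg:
        return 0.0
    return -included  # e.g. included = +15 deg -> rotate back 15 deg clockwise


# FIG. 2 example: wall runs at 0 deg, device heads at 15 deg -> rotate -15 deg.
assert correct_heading(15.0, 0.0) == -15.0
```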
Optionally, the method provided by the embodiment of the present invention may further include: while the smart device is traveling during execution of the target task, upon receiving an instruction input by background personnel to control the smart device to turn, detecting whether there is an intersection in the travel direction of the smart device; and, if there is an intersection in the travel direction, planning, based on the modeling data corresponding to the intersection, a travel route that passes through the intersection and proceeds in the turning direction corresponding to the turning instruction.
While the smart device executes the target task with reference to the target reference scene, background personnel can watch the process through a monitoring screen. It can be understood that, since the smart device merely executes the target task with reference to a target reference scene whose similarity to the new scene is very high, if the positions of some obstacles in the target reference scene differ from their positions in the new scene, the smart device cannot fully follow the target reference scene when executing the target task and needs to be adjusted in time. At this point the background personnel can intervene and send corresponding control instructions to the smart device, such as instructions to make it turn left or right.
As shown in FIG. 3, suppose the background personnel steer the smart device to an intersection and issue a right-turn instruction, so that it can be determined that the device needs to proceed to the right after passing through the intersection. It can be understood that the smart device may be provided with various sensors, such as a depth camera, a lidar, ultrasonic sensors and a gyroscope. These sensors can be used to detect whether there really is an intersection ahead of the smart device; if so, the modeling data corresponding to the intersection is further determined from the data collected by these sensors, and based on that modeling data a target travel route through the intersection and to the right is planned.
As shown in the left part of FIG. 3, if the smart device is steered through the right turn entirely manually, the overall turning path is not smooth. In a worse case, the background personnel may need to adjust the travel angle repeatedly, or, on finding the device about to hit the corner wall, back it up and move forward again in order to finally steer it through the intersection to the right. With the solution provided by the embodiment of the present invention, the smart device automatically plans a smooth target travel route, and since there is no need to adjust the travel angle repeatedly or to reverse, that target travel route is also the most efficient route through the intersection to the right.
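The smooth turn in FIG. 3 can be sketched as planning a single arc through the detected intersection. The circular-arc geometry below is one simple choice made for illustration and is not prescribed by the embodiment:

```python
# Illustrative sketch of the turn-assist in FIG. 3: when a turn command is
# received and the sensors confirm an intersection ahead, plan one smooth
# arc through it instead of many manual corrections.
import math


def plan_turn_route(intersection_center, approach_heading_deg, turn="right",
                    radius_m=0.8, steps=10):
    """Return waypoints along a quarter-circle arc (a simple smooth-turn
    shape) near the intersection in the commanded direction; the coordinate
    convention and arc shape are illustrative only."""
    sign = -1.0 if turn == "right" else 1.0
    start = math.radians(approach_heading_deg)
    cx, cy = intersection_center
    waypoints = []
    for i in range(steps + 1):
        theta = start + sign * (math.pi / 2) * (i / steps)
        waypoints.append((cx + radius_m * math.cos(theta),
                          cy + radius_m * math.sin(theta)))
    return waypoints
```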
With the present invention, when a smart device newly enters a new scene for which no corresponding modeling data yet exists in the cloud server, a corresponding image can be taken of the new scene and then compared and matched against the data corresponding to a plurality of preset reference scenes, so as to find a target reference scene fairly similar to the new scene. The target task can then be executed in the new scene with reference to the target reference scene. In this way, the smart device can efficiently execute the target task when newly entering a new scene.
The smart device control apparatus of one or more embodiments of the present invention is described in detail below. Those skilled in the art can understand that these smart device control apparatuses can all be constructed by configuring commercially available hardware components through the steps taught in this solution.
FIG. 4 is a schematic structural diagram of a smart device control apparatus provided by an embodiment of the present invention. As shown in FIG. 4, the apparatus includes:
a photographing module 41, configured to photograph a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene;
a matching module 42, configured to compare and match the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data includes at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
a control module 43, configured to control, based on the target reference scene, the smart device to execute a target task in the new scene.
Optionally, the target task is a navigation task;
the control module 43 is configured to:
plan, based on the modeling data corresponding to the target reference scene, a travel route along which the smart device performs the navigation task in the new scene.
Optionally, the target task is a scanning task, the scanning task referring to a task of creating the modeling data corresponding to the new scene;
the control module 43 is configured to:
determine, according to a preset correspondence between reference scenes and scanning strategy data, target scanning strategy data corresponding to the target reference scene;
control the smart device to execute the scanning task based on the target scanning strategy data.
Optionally, the scanning strategy data includes at least any one of the following:
travel speed, travel angle, travel route, travel strategy;
wherein the travel strategy is used to instruct the smart device to travel on the left side, the right side or the middle of an aisle.
Optionally, the new scene and the target reference scene are rooms with similar structures and furnishings.
Optionally, the photographing module 41 is further configured to:
obtain images of a reference scene captured by a photographer in the reference scene with a held terminal at different angles or at different positions, and record the route walked by the photographer while capturing the images, as the travel route in the scanning strategy data corresponding to the reference scene.
Optionally, the control module 43 is further configured to:
if the image corresponding to the new scene matches none of the modeling data corresponding to the plurality of preset reference scenes, obtain modeling data corresponding to the new scene provided by a manager of the new scene, the modeling data corresponding to the new scene being a three-dimensional scene model;
collect manipulation behavior data generated when background personnel control the virtual twin corresponding to the smart device to execute the target task in the three-dimensional scene model;
based on the manipulation behavior data, control the smart device to execute the target task in the new scene.
Optionally, the control module 43 is further configured to:
while the smart device is traveling during execution of the target task, detect, in the location area where the smart device is currently located, a parallel line matching the travel direction corresponding to a target manipulation;
determine the travel angle corresponding to the target manipulation;
if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, control the smart device to turn back by the target included angle and proceed.
Optionally, the control module 43 is further configured to:
while the smart device is traveling during execution of the target task, upon receiving an instruction input by background personnel to control the smart device to turn, detect whether there is an intersection in the travel direction of the smart device;
if there is an intersection in the travel direction, plan, based on modeling data corresponding to the intersection, a travel route that passes through the intersection and proceeds in the turning direction corresponding to the turning instruction.
The apparatus shown in FIG. 4 can execute the smart device control method provided in the embodiments shown in FIG. 1 to FIG. 3 above; for the detailed execution process and technical effects, refer to the descriptions in the foregoing embodiments, which are not repeated here.
In a possible design, the structure of the smart device control apparatus shown in FIG. 4 above may be implemented as a server. As shown in FIG. 5, the server may include a processor 91 and a memory 92, wherein executable code is stored on the memory 92, and when the executable code is executed by the processor 91, the processor 91 is caused to at least implement the smart device control method provided in the embodiments shown in FIG. 1 to FIG. 3 above.
Optionally, the server may further include a communication interface 93 for communicating with other devices.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium on which executable code is stored; when the executable code is executed by a processor of a server, the processor is caused to at least implement the smart device control method provided in the embodiments shown in FIG. 1 to FIG. 3 above.
The apparatus embodiments described above are merely illustrative, and the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments, which those of ordinary skill in the art can understand and implement without creative effort.
From the description of the above implementations, those skilled in the art can clearly understand that each implementation can be realized by means of a necessary general hardware platform, and certainly also by a combination of hardware and software. Based on this understanding, the essence of the above technical solutions, or the part contributing to the prior art, can be embodied in the form of a computer product, and the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The smart device control method provided by the embodiments of the present invention can be executed by a program/software, which can be provided by the network side. The server mentioned in the foregoing embodiments can download the program/software to a local non-volatile storage medium, and, when it needs to execute the aforementioned smart device control method, the CPU reads the program/software into memory and then executes it to implement the smart device control method provided in the foregoing embodiments. For the execution process, refer to the illustrations in FIG. 1 to FIG. 3 above.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (12)

  1. A smart device control method, characterized by comprising:
    photographing a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene;
    comparing and matching the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data comprises at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
    based on the target reference scene, controlling the smart device to execute a target task in the new scene.
  2. The method according to claim 1, characterized in that the target task is a navigation task;
    the controlling, based on the target reference scene, the smart device to execute a target task in the new scene comprises:
    based on the modeling data corresponding to the target reference scene, planning a travel route along which the smart device performs the navigation task in the new scene.
  3. The method according to claim 1, characterized in that the target task is a scanning task, the scanning task referring to a task of creating the modeling data corresponding to the new scene;
    the controlling, based on the target reference scene, the smart device to execute a target task in the new scene comprises:
    determining, according to a preset correspondence between reference scenes and scanning strategy data, target scanning strategy data corresponding to the target reference scene;
    controlling the smart device to execute the scanning task based on the target scanning strategy data.
  4. The method according to claim 3, characterized in that the scanning strategy data comprises at least any one of the following:
    travel speed, travel angle, travel route, travel strategy;
    wherein the travel strategy is used to instruct the smart device to travel on the left side, the right side or the middle of an aisle.
  5. The method according to claim 1, characterized in that the new scene and the target reference scene are rooms with similar structures and furnishings.
  6. The method according to claim 1, characterized in that before the comparing and matching of the image corresponding to the new scene with the modeling data corresponding to the plurality of preset reference scenes, the method further comprises:
    obtaining images of a reference scene captured by a photographer in the reference scene with a held terminal at different angles or at different positions, and recording the route walked by the photographer while capturing the images, as the travel route in the scanning strategy data corresponding to the reference scene.
  7. The method according to claim 1, characterized in that the method further comprises:
    if the image corresponding to the new scene matches none of the modeling data corresponding to the plurality of preset reference scenes, obtaining modeling data corresponding to the new scene provided by a manager of the new scene, the modeling data corresponding to the new scene being a three-dimensional scene model;
    collecting manipulation behavior data generated when background personnel control a virtual twin corresponding to the smart device to execute the target task in the three-dimensional scene model;
    based on the manipulation behavior data, controlling the smart device to execute the target task in the new scene.
  8. The method according to claim 1, characterized in that the method further comprises:
    while the smart device is traveling during execution of the target task, detecting, in the location area where the smart device is currently located, a parallel line matching the travel direction corresponding to a target manipulation;
    determining the travel angle corresponding to the target manipulation;
    if there is a target included angle between the travel angle corresponding to the target manipulation and the parallel line, controlling the smart device to turn back by the target included angle and proceed.
  9. The method according to claim 1, characterized in that the method further comprises:
    while the smart device is traveling during execution of the target task, upon receiving an instruction input by background personnel to control the smart device to turn, detecting whether there is an intersection in the travel direction of the smart device;
    if there is an intersection in the travel direction, planning, based on modeling data corresponding to the intersection, a travel route that passes through the intersection and proceeds in the turning direction corresponding to the turning instruction.
  10. A smart device control apparatus, characterized by comprising:
    a photographing module, configured to photograph a new scene that a smart device has currently entered, to obtain an image corresponding to the new scene;
    a matching module, configured to compare and match the image corresponding to the new scene with modeling data corresponding to a plurality of preset reference scenes, to determine a target reference scene that matches the new scene, wherein the modeling data comprises at least one of images obtained by photographing the reference scene at different angles or at different positions, and a three-dimensional scene model corresponding to the reference scene;
    a control module, configured to control, based on the target reference scene, the smart device to execute a target task in the new scene.
  11. A server, characterized by comprising a memory and a processor, wherein executable code is stored on the memory, and when the executable code is executed by the processor, the processor is caused to execute the smart device control method according to any one of claims 1-9.
  12. A non-transitory machine-readable storage medium, characterized in that executable code is stored on the non-transitory machine-readable storage medium, and when the executable code is executed by a processor of a server, the processor is caused to execute the smart device control method according to any one of claims 1-9.
PCT/CN2022/105812 2021-12-29 2022-07-14 Smart device control method, apparatus, server and storage medium WO2023124017A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111643929.2 2021-12-29
CN202111643929.2A CN114371632A (zh) 2021-12-29 2021-12-29 智能设备控制方法、装置、服务器和存储介质

Publications (1)

Publication Number Publication Date
WO2023124017A1 true WO2023124017A1 (zh) 2023-07-06

Family

ID=81141749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105812 WO2023124017A1 (zh) 2021-12-29 2022-07-14 智能设备控制方法、装置、服务器和存储介质

Country Status (2)

Country Link
CN (1) CN114371632A (zh)
WO (1) WO2023124017A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114371632A (zh) * 2021-12-29 2022-04-19 达闼机器人有限公司 智能设备控制方法、装置、服务器和存储介质
CN115847488B (zh) 2023-02-07 2023-05-02 成都秦川物联网科技股份有限公司 用于协作机器人监测的工业物联网***及控制方法

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003208633A (ja) * 2002-01-10 2003-07-25 Mitsubishi Electric Corp サーバ及びクライアント及び伝送システム及び伝送方法
US20110043625A1 (en) * 2009-08-18 2011-02-24 Wesley Kenneth Cobb Scene preset identification using quadtree decomposition analysis
CN106933227A (zh) * 2017-03-31 2017-07-07 联想(北京)有限公司 一种引导智能机器人的方法以及电子设备
CN108638062A (zh) * 2018-05-09 2018-10-12 科沃斯商用机器人有限公司 机器人定位方法、装置、定位设备及存储介质
CN109906435A (zh) * 2016-11-08 2019-06-18 夏普株式会社 移动体控制装置以及移动体控制程序
CN110457406A (zh) * 2018-05-02 2019-11-15 北京京东尚科信息技术有限公司 地图构建方法、装置和计算机可读存储介质
CN110533553A (zh) * 2018-05-25 2019-12-03 阿里巴巴集团控股有限公司 服务提供方法和装置
CN110569913A (zh) * 2019-09-11 2019-12-13 北京云迹科技有限公司 场景分类器训练方法、装置、场景识别方法及机器人
CN110765525A (zh) * 2019-10-18 2020-02-07 Oppo广东移动通信有限公司 生成场景图片的方法、装置、电子设备及介质
CN110889871A (zh) * 2019-12-03 2020-03-17 广东利元亨智能装备股份有限公司 机器人行驶方法、装置及机器人
CN112183285A (zh) * 2020-09-22 2021-01-05 合肥科大智能机器人技术有限公司 一种变电站巡检机器人的3d点云地图融合方法和***
CN112729321A (zh) * 2020-12-28 2021-04-30 上海有个机器人有限公司 一种机器人的扫图方法、装置、存储介质和机器人
CN112947424A (zh) * 2021-02-01 2021-06-11 国网安徽省电力有限公司淮南供电公司 配网作业机器人自主作业路径规划方法和配网作业***
CN114371632A (zh) * 2021-12-29 2022-04-19 达闼机器人有限公司 智能设备控制方法、装置、服务器和存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050649B (zh) * 2021-03-24 2023-11-17 西安科技大学 一种数字孪生驱动的巡检机器人远程控制***及方法
CN113263497A (zh) * 2021-04-07 2021-08-17 新兴际华科技发展有限公司 消防机器人远程智能人机交互方法
CN113240031B (zh) * 2021-05-25 2021-11-19 中德(珠海)人工智能研究院有限公司 全景图像特征点匹配模型的训练方法、装置以及服务器


Also Published As

Publication number Publication date
CN114371632A (zh) 2022-04-19

Similar Documents

Publication Publication Date Title
US11165959B2 (en) Connecting and using building data acquired from mobile devices
WO2023124017A1 (zh) 智能设备控制方法、装置、服务器和存储介质
US11057561B2 (en) Capture, analysis and use of building data from mobile devices
US9854206B1 (en) Privacy-aware indoor drone exploration and communication framework
Gao et al. Robust RGB-D simultaneous localization and mapping using planar point features
US11632602B2 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
EP2571660B1 (en) Mobile human interface robot
WO2017114508A1 (zh) 三维监控***中基于三维重构的交互式标定方法和装置
EP3032369A2 (en) Methods for clearing garbage and devices for the same
JP5267698B2 (ja) 巡回ロボット及び巡回ロボットの自律走行方法
US11790648B2 (en) Automated usability assessment of buildings using visual data of captured in-room images
WO2011146259A2 (en) Mobile human interface robot
GB2527207A (en) Mobile human interface robot
CN111932666B (zh) 房屋三维虚拟图像的重建方法、装置和电子设备
CA3069813C (en) Capturing, connecting and using building interior data from mobile devices
CN105898107A (zh) 一种目标物体抓拍方法及***
CN112053415B (zh) 一种地图构建方法和自行走设备
US20230196684A1 (en) Presenting Building Information Using Video And Building Models
WO2023020224A1 (zh) 导航视频生成、采集方法、装置、服务器、设备及介质
CN114416244A (zh) 信息的显示方法、装置、电子设备及存储介质
KR20240052734A (ko) Cctv 영상의 객체 정보와 3차원 공간 좌표를 매칭한 상황인지 기반 실시간 공간 지능 정보안내 시스템 및 그 방법
JP5552069B2 (ja) 移動物体追跡装置
US20180350216A1 (en) Generating Representations of Interior Space
CN108803383A (zh) 一种设备控制方法、装置、***和存储介质
JP2016197192A (ja) プロジェクションシステムと映像投影方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913306

Country of ref document: EP

Kind code of ref document: A1