WO2021082565A1 - Camera control system, method, and mobile robot - Google Patents

Camera control system, method, and mobile robot

Info

Publication number
WO2021082565A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment map
mobile robot
camera
client
unit
Application number
PCT/CN2020/105634
Other languages
English (en)
French (fr)
Inventor
眭灵慧
刘鹏
闫瑞君
Original Assignee
深圳市银星智能科技股份有限公司
Application filed by 深圳市银星智能科技股份有限公司
Publication of WO2021082565A1


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0285 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network

Definitions

  • The embodiments of the present invention relate to the technical field of mobile robots, and in particular to a camera control system, a camera control method, and a mobile robot.
  • The embodiments of the present invention provide a camera control system, a camera control method, and a mobile robot, which address the privacy and security problems of current mobile robots and effectively protect user privacy.
  • An embodiment of the present invention provides a camera control system including a mobile robot and a client. The mobile robot includes: a camera unit, disposed on the body of the mobile robot and used to acquire image data and/or video data; a lidar, disposed on the body of the mobile robot and used to acquire laser point cloud data; and a computing unit, communicatively connected to the camera unit and the lidar and used to construct an environment map from the laser point cloud data. The client is communicatively connected to the computing unit and is used to receive the environment map sent by the computing unit, perform area selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  • an embodiment of the present invention provides a camera control method, which is applied to the above-mentioned camera control system.
  • The method includes: acquiring laser point cloud data; constructing an environment map based on a laser SLAM algorithm; performing room segmentation on the environment map to generate an environment map containing multiple room areas; sending the environment map containing multiple room areas to the client; receiving the labeled environment map sent by the client, where the labels are used to identify private areas; and determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  • An embodiment of the present invention provides a mobile robot, including: at least one processor; and a memory communicatively connected with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the aforementioned camera control method.
  • An embodiment of the present invention provides a non-volatile computer-readable storage medium storing computer-executable instructions, where the computer-executable instructions are used to cause a server to perform the aforementioned camera control method.
  • The embodiment of the present invention provides a camera control system including a mobile robot and a client. The mobile robot includes: a camera unit, arranged on the body of the mobile robot and used to obtain image data and/or video data; a lidar, arranged on the body of the mobile robot and used to obtain laser point cloud data; and a computing unit, communicatively connected to the camera unit and the lidar and used to construct an environment map from the laser point cloud data. The client is communicatively connected to the computing unit and is used to receive the environment map sent by the computing unit, perform area selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit, according to the labeled environment map combined with the current position of the mobile robot, determines whether to turn the camera unit off or on.
  • FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a camera control system provided by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of another camera control system provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a camera control method provided by an embodiment of the present invention.
  • FIG. 5 is a detailed flowchart of step S30 in FIG. 4.
  • FIG. 6 is a detailed flowchart of step S60 in FIG. 4.
  • FIG. 7 is a schematic structural diagram of a camera control device provided by an embodiment of the present invention.
  • Fig. 8 is a schematic structural diagram of a mobile robot provided by an embodiment of the present invention.
  • Mobile robots include cleaning robots, service robots, remote monitoring robots, sweeping robots, and the like. Since most mobile robots are equipped with cameras to monitor the indoor environment, and the cameras remain in the working state once started, a mobile robot that enters a private area may capture video or images there, easily leaking the user's privacy and causing privacy and security issues.
  • the embodiments of the present invention provide a camera control system, method, and mobile robot to effectively protect user privacy.
  • FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the present invention.
  • the mobile robot, the client, and the server are connected through network communication, where the network includes a wired network and/or a wireless network.
  • the network includes wireless networks such as 2G, 3G, 4G, 5G, wireless local area network, Bluetooth, etc., and may also include wired networks such as serial cables and network cables.
  • the mobile robot includes, but is not limited to, robots such as cleaning robots, service robots, remote monitoring robots, and sweeping robots.
  • the client includes but is not limited to:
  • Mobile communication equipment: this type of equipment is characterized by mobile communication functions, with the main goal of providing voice and data communication. Such devices include smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
  • Mobile personal computer equipment: this type of equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access. Such devices include PDA, MID, and UMPC devices, such as the iPad.
  • Portable entertainment equipment: this type of equipment can display and play video content and generally also has mobile Internet access. Such devices include video players, handheld game consoles, smart toys, and portable car navigation devices.
  • server includes but is not limited to:
  • Tower server: the case of an ordinary tower server is similar to a common PC case, while a large tower case is much larger; in general, there is no fixed standard for its external dimensions.
  • Blade server: a blade server is a HAHD (High Availability High Density) low-cost server platform designed for special application industries and high-density computing environments. Each "blade" is actually a system motherboard, similar to an independent server. In this mode, each motherboard runs its own system and serves a different designated user group, with no relation between them. However, system software can be used to assemble these motherboards into a server cluster. In cluster mode, all motherboards can be connected to provide a high-speed network environment and share resources, serving the same user group.
  • Cloud server (Elastic Compute Service, ECS): a simple, efficient, safe, and reliable computing service with elastically scalable processing capability. Its management is simpler and more efficient than that of physical servers, and users can quickly create or release any number of cloud servers without purchasing hardware in advance.
  • The distributed storage of cloud servers integrates a large number of servers into a supercomputer that provides large-scale data storage and processing services. Distributed file systems and distributed databases allow access to common storage resources and realize I/O sharing of application data files. Virtual machines can break through the limitations of a single physical machine, dynamically adjusting and allocating resources to eliminate single points of failure of servers and storage devices and achieve high availability.
  • FIG. 2 is a schematic structural diagram of a camera control system according to an embodiment of the present invention.
  • The camera control system 100 includes a camera unit 10, a lidar 20, a computing unit 30, and a client 40, where the computing unit 30 is connected to the camera unit 10, the lidar 20, and the client 40, respectively.
  • the camera unit 10 is arranged on the body of the mobile robot 100 and is used to obtain image data and/or video data;
  • The camera unit 10 is communicatively connected to the computing unit 30 and is provided on the body of the mobile robot 100 to acquire image data and/or video data within the coverage of the camera unit 10, for example in a certain enclosed space, and to send the acquired image data and/or video data to the computing unit 30.
  • the camera unit 10 includes camera devices such as an infrared camera, a night vision camera, a web camera, a digital camera, a high-definition camera, a 4K camera, and an 8K high-definition camera.
  • The lidar 20 is communicatively connected to the computing unit 30 and is arranged on the body of the mobile robot 100, for example on a mobile chassis of the mobile robot 100, and is used to obtain laser point cloud data within the monitoring range. The mobile chassis is provided with a communication module, through which the laser point cloud data obtained by the lidar is sent to the computing unit 30. The mobile chassis includes robot mobile chassis such as an omnidirectional universal chassis and an arched mobile chassis.
  • The computing unit 30 is communicatively connected to the camera unit 10, the lidar 20, and the client 40, and is used to obtain the image data and/or video data acquired by the camera unit 10 and the laser point cloud data acquired by the lidar 20, and to construct an environment map from the laser point cloud data. Specifically, the computing unit 30 applies a laser SLAM algorithm, such as particle filtering or graph optimization, to the laser point cloud data of the monitored area to construct the environment map. The computing unit 30 includes a circuit board with computing capability, such as a PCB, or a processor such as a CPU or GPU. The computing unit 30 is also used to control the movement of the mobile robot, for example by controlling the mobile chassis on the body of the mobile robot.
  • The client 40 is communicatively connected to the computing unit 30 and is configured to receive the environment map sent by the computing unit 30, perform area selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit 30, so that the computing unit 30 determines whether to turn the camera unit 10 off or on according to the labeled environment map combined with the current position of the mobile robot 100.
  • The client 40 includes, but is not limited to, electronic devices such as mobile communication equipment, mobile personal computer equipment, and portable entertainment equipment. The client 40 is installed with an application (APP), through which the user can receive the environment map sent by the computing unit 30 and send commands to the computing unit 30 so that the computing unit 30 controls the mobile robot 100 to execute them. For example, the user can send a standby command so that the computing unit 30 puts the mobile robot 100 into the standby state, or send a camera shutdown command so that the computing unit 30 turns off the camera unit 10 of the mobile robot 100.
  • The client's application receives the environment map sent by the computing unit 30, so that the user can perform area selection on the environment map through the application. The selected area is treated as a private area, tags are automatically added to it to generate a labeled environment map, and the labeled environment map is sent to the computing unit 30. Based on the labeled environment map, the computing unit 30 determines whether to turn the camera unit 10 off or on; for example, if the current position of the mobile robot 100 is in a tagged private area, the camera unit 10 is turned off, and if the current position is in an untagged area, the camera unit 10 is turned on.
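As an illustration of this gating logic, the following sketch (not the patent's actual implementation; the grid encoding, cell size, and function names are assumptions) checks whether the robot's current position falls in a tagged cell of the labeled environment map and decides the camera state accordingly:

```python
# Minimal sketch: gate the camera on the robot's current grid cell.
# Encoding follows the labeling scheme described in the text:
# 0 = non-private cell, 1 = private cell. Cell size is illustrative.

def camera_should_be_on(labeled_map, position, cell_size=0.05):
    """Return True if the camera may stay on at `position` (x, y in metres).

    labeled_map: 2D list of ints; 1 marks a private cell, 0 a normal cell.
    """
    x, y = position
    col = int(x / cell_size)
    row = int(y / cell_size)
    # Out-of-map positions are treated conservatively: camera off.
    if not (0 <= row < len(labeled_map) and 0 <= col < len(labeled_map[0])):
        return False
    return labeled_map[row][col] == 0

# Example: a 4x4 map whose right half is tagged private.
grid = [[0, 0, 1, 1]] * 4
assert camera_should_be_on(grid, (0.02, 0.02)) is True   # normal area
assert camera_should_be_on(grid, (0.17, 0.02)) is False  # private area
```

The conservative out-of-map default mirrors the privacy-first intent of the system: when the position cannot be matched to the labeled map, the camera stays off.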
  • FIG. 3 is a schematic structural diagram of another camera control system according to an embodiment of the present invention.
  • The camera control system includes: a camera unit 10, a lidar 20, a computing unit 30, a client 40, a server 50, a communication module 60, and a voice recognition module 70, where the computing unit 30 is connected to the camera unit 10, the lidar 20, the communication module 60, and the voice recognition module 70, respectively.
  • the communication module 60 is connected to the server 50, and the server 50 is connected to the client 40.
  • The camera unit 10 is communicatively connected to the computing unit 30 and is provided on the body of the mobile robot 100 to acquire image data and/or video data within the coverage of the camera unit 10, for example in a certain enclosed space, and to send the acquired image data and/or video data to the computing unit 30.
  • the camera unit 10 includes camera devices such as an infrared camera, a night vision camera, a web camera, a digital camera, a high-definition camera, a 4K camera, and an 8K high-definition camera.
  • The lidar 20 is communicatively connected to the computing unit 30 and is arranged on the body of the mobile robot 100, for example on a mobile chassis of the mobile robot 100, and is used to obtain laser point cloud data within the monitoring range. The mobile chassis is provided with a communication module, through which the laser point cloud data obtained by the lidar is sent to the computing unit 30. The mobile chassis includes robot mobile chassis such as an omnidirectional universal chassis and an arched mobile chassis. The lidar 20 includes radars such as pulsed lidar and continuous-wave lidar.
  • The computing unit 30 is communicatively connected to the camera unit 10, the lidar 20, and the communication module 60, and is used to obtain the image data and/or video data acquired by the camera unit 10 and the laser point cloud data acquired by the lidar, and to construct an environment map from the laser point cloud data. Specifically, the computing unit 30 applies a laser SLAM algorithm, such as particle filtering or graph optimization, to the laser point cloud data of the monitored area to construct the environment map, and sends the environment map to the communication module 60, which forwards it to the server 50. The computing unit 30 includes a circuit board with computing capability, such as a PCB, or a processor such as a CPU or GPU, or one or a combination of a Microcontroller Unit (MCU), a Field-Programmable Gate Array (FPGA), and a System on Chip (SoC).
  • The computing unit 30 includes a storage module, which includes, but is not limited to, one or more of FLASH flash memory, NAND flash memory, vertical NAND flash memory (VNAND), NOR flash memory, resistive random access memory (RRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), spin-transfer torque random access memory (STT-RAM), and other devices.
  • The client 40 is communicatively connected to the server 50 and is configured to receive the environment map sent by the server 50, perform area selection on the environment map, generate a labeled environment map, and send the labeled environment map to the server 50. The server 50 sends the labeled environment map to the communication module 60, which forwards it to the computing unit 30, so that the computing unit 30 determines whether to turn the camera unit 10 off or on according to the labeled environment map combined with the current position of the mobile robot 100.
  • The server 50 is communicatively connected to the communication module 60 and the client 40, and is configured to receive the environment map sent by the communication module 60 and send it to the client 40. The server 50 is further configured to receive the labeled environment map sent by the client 40 and send it to the communication module 60. The server 50 includes a storage module, which can be used to store the environment map and the labeled environment map.
  • the server 50 includes, but is not limited to, a tower server, a rack server, a blade server, and a cloud server.
  • The communication module 60 is communicatively connected to the computing unit 30 and the server 50 and is used to forward and receive information, for example forwarding the environment map sent by the computing unit 30 and receiving the labeled environment map sent by the server 50. The communication module 60 enables communication with the Internet and includes, but is not limited to, communication units such as WIFI, ZigBee, NB-IoT, 4G, 5G, and Bluetooth modules.
  • The voice recognition module 70 is communicatively connected to the computing unit 30 and is configured to obtain voice information, generate a control code from the voice information, and send the control code to the computing unit 30, so that the computing unit 30 generates a control instruction from the control code to turn the camera unit 10 on or off.
  • The voice recognition module 70 is also communicatively connected to the client 40 and is used to receive the voice information sent by the client 40 and generate control codes from it. For example, the voice recognition module outputs the voice information as a binary code according to a preset protocol, where the preset protocol may map the syllables of the voice information to English letters, parse those letters into a binary code, and transmit the binary code to the computing unit. For instance, the voice recognition module recognizes the word "open" as the pinyin "kaiqi" and converts "kaiqi" into a binary code, where each English letter corresponds to a unique binary code, thereby obtaining the binary code corresponding to "open". When the computing unit receives the binary code sent by the voice recognition module over a preset communication protocol (for example, TCP/IP or UDP), it generates the control instruction corresponding to the binary code to switch the camera unit. The preset communication protocol can also be a custom protocol, for example 0x11 meaning the camera is turned on and 0x00 meaning the camera is turned off.
  • The voice recognition module may include a voice library containing phrases or sentences preset by the user, such as "turn on", "turn off", "open", and "close", each corresponding to a binary code. When the voice recognition module receives voice information from the client, it recognizes the phrase or sentence in the voice information and matches it against the phrases or sentences in the voice library; on a match, the corresponding binary code is obtained automatically, which reduces the time needed to convert voice information into a binary code and improves the speed of voice recognition.
  • In the embodiment of the present invention, a camera control system is provided, which includes: a camera unit, arranged on the body of the mobile robot and used to acquire image data and/or video data; a mobile chassis, arranged on the body of the mobile robot and equipped with a lidar used to obtain laser point cloud data; a computing unit, communicatively connected to the camera unit and the mobile chassis and used to construct an environment map from the laser point cloud data; and a client, communicatively connected to the computing unit and used to receive the environment map sent by the computing unit, perform area selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot. In this way, the present invention can effectively protect the user's privacy.
  • FIG. 4 is a schematic flowchart of a camera control method according to an embodiment of the present invention.
  • the camera control method includes:
  • Step S10: Obtain laser point cloud data.
  • The camera control method is applied to a mobile robot that includes a mobile chassis provided with a lidar, and the lidar is used to obtain the laser point cloud data in the monitoring area of the mobile robot.
  • Step S20: Construct an environment map based on a laser SLAM algorithm.
  • Laser SLAM (Simultaneous Localization and Mapping) algorithms include Kalman filter, particle filter, and graph optimization methods; the environment map is constructed through the laser SLAM algorithm.
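A full laser SLAM pipeline also estimates the robot pose with the filters named above; as a minimal sketch of the mapping half only, assuming poses are already known, each laser point can be rasterized into an occupancy grid (the function names and the 0.1 m resolution are illustrative, not the patent's method):

```python
# Hedged sketch of the map-building step only: real laser SLAM jointly
# estimates the robot pose; here the pose is assumed known and each laser
# point is simply rasterized into an occupancy grid.

def build_occupancy_grid(points, width, height, resolution=0.1):
    """points: iterable of (x, y) laser hits in metres, map origin at (0, 0)."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col, row = int(x / resolution), int(y / resolution)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = 1  # mark the cell as occupied
    return grid

scan = [(0.05, 0.05), (0.35, 0.05), (0.95, 0.95)]
g = build_occupancy_grid(scan, width=10, height=10)
assert g[0][0] == 1 and g[0][3] == 1 and g[9][9] == 1
assert sum(map(sum, g)) == 3  # exactly three occupied cells
```

The resulting binary grid is the kind of map that the room-segmentation step S30 below then partitions into room areas.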
  • Step S30: Perform room segmentation on the environment map to generate an environment map containing multiple room areas.
  • Specifically, the environment map is segmented into rooms by a watershed algorithm to generate an environment map containing multiple room areas.
  • FIG. 5 is a detailed flowchart of step S30 in FIG. 4;
  • performing room segmentation on the environment map to generate an environment map containing multiple room areas includes:
  • Step S31: Remove unstructured obstacles from the environment map and perform gray-scale processing on it to generate a preprocessed grayscale map.
  • Step S32 Filter the preprocessed grayscale map, and perform edge detection on the filtered grayscale map
  • the preprocessed grayscale map is filtered, for example with Kalman filtering or particle filtering, and edge detection is performed on the filtered grayscale map.
  • Step S33 Find the outline of the map, and number the outlines of different areas
  • the map contours are found. The filtered grayscale map includes the contours of multiple different regions, and these contours are numbered, each with its own unique number. This is equivalent to setting water-injection points on the filtered grayscale map, where the number of water-injection points equals the number of contours.
  • Step S34 Generate a closed contour according to the similarity of adjacent pixels, and segment the environment map according to the closed contour.
  • pixels that are close in spatial position and similar in gray value are connected to each other to form a new closed contour, and the map is then segmented according to the processed region contours.
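The seed-and-grow procedure of steps S33–S34 can be approximated, for illustration, by a flood fill that grows each numbered region until it meets a wall. This is a simplified stand-in for the watershed algorithm named in step S30: the toy occupancy grid, the wall encoding, and the label numbering are all assumptions made for the example.

```python
from collections import deque

# Toy occupancy grid: 1 = wall, 0 = free space. Two "rooms" split by a wall.
GRID = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1],
]

def segment_rooms(grid):
    """Label each connected free region with its own room number (>= 2)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0 if cell == 0 else -1 for cell in row] for row in grid]
    next_label = 2                      # room numbers start at 2; -1 = wall
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] == 0:       # unlabeled free cell: new seed point
                queue = deque([(r, c)])
                labels[r][c] = next_label
                while queue:            # flood fill, 4-connected neighbors
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                next_label += 1
    return labels, next_label - 2       # labeled grid and room count

if __name__ == "__main__":
    labels, n_rooms = segment_rooms(GRID)
    print(n_rooms)
```

A production segmentation would run a true watershed on the distance-transformed grayscale map (e.g. OpenCV's `cv2.watershed`), but the output has the same form: every free cell carries a room number.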
  • Step S40 Send the environment map including multiple room areas to the client;
  • the computing unit sends the environment map containing multiple room areas to the server through the communication module, so that the server forwards it to the client; alternatively, the computing unit sends the environment map containing multiple room areas directly to the client. That is, the computing unit sends the segmented environment map to the server through the communication module so that the server sends it to the client, or the computing unit sends the segmented environment map directly to the client.
  • Step S50 Receive a labeled environment map sent by the client, where the label is used to identify a private area;
  • region selection is performed on the segmented environment map, that is, the private areas in the environment map are selected. After a private area is selected, a label is set for it; alternatively, all areas are labeled. For example, 0 represents a non-private area and 1 represents a private area, so all private areas selected by the user are set to label 1 and the other areas to label 0.
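The 0/1 labeling described above can be sketched as a small mapping from room number to privacy flag. The room identifiers and the function name here are illustrative assumptions; only the 0 = non-private, 1 = private convention comes from the text.

```python
def label_map(room_ids, private_rooms):
    """Return a {room_id: label} dict: 1 for user-selected private rooms, else 0."""
    private = set(private_rooms)
    return {room: (1 if room in private else 0) for room in room_ids}

if __name__ == "__main__":
    # Rooms 2..5 exist; the user marked rooms 3 and 5 (say, bedroom and
    # bathroom) as private in the client app.
    tags = label_map([2, 3, 4, 5], private_rooms=[3, 5])
    print(tags)
```

The labeled map returned to the computing unit is then just the segmented map plus this per-room dictionary (or the equivalent per-cell labels).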
  • the client then transmits the labeled environment map over the Internet to the server, so that the server sends the labeled environment map to the communication module, which forwards it to the computing unit; alternatively, the client sends the labeled environment map directly to the computing unit.
  • Step S60 Determine whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  • the labeled environment map carries labels of private areas and/or non-private areas, and the computing unit determines whether a private area exists according to the private-area labels.
  • FIG. 6 is a detailed flowchart of step S60 in FIG. 4;
  • determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot includes:
  • Step S61 Output the current position of the mobile robot according to the laser SLAM algorithm
  • Step S62 Determine whether the current position of the mobile robot is in a private area of the labeled environment map; if so, go to step S621: turn off the camera unit; if not, go to step S622: turn on the camera unit.
  • Step S621 Turn off the camera unit
  • Step S622 Turn on the camera unit
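Steps S61 and S62 amount to a position lookup in the labeled map followed by a switch decision. The sketch below is an illustrative stand-in, assuming the map is held as grid-cell labels and the SLAM localizer reports the robot's current cell; the function name and cell coordinates are invented for the example, while the label convention (1 = private) follows step S50.

```python
def camera_state(labels, position):
    """Step S62 as a lookup: 'off' inside a private cell, 'on' otherwise.

    `labels` maps grid cells to 1 (private) or 0 (non-private);
    `position` is the robot's current cell output by the SLAM localizer
    (step S61). Unlabeled cells are treated as non-private.
    """
    return "off" if labels.get(position, 0) == 1 else "on"

if __name__ == "__main__":
    labeled = {(1, 1): 1, (1, 2): 1, (4, 4): 0}   # bedroom cells tagged 1
    print(camera_state(labeled, (1, 2)))   # robot inside the private area
    print(camera_state(labeled, (4, 4)))   # robot in a non-private area
```

Treating unlabeled cells as non-private matches the flow in FIG. 6, where the camera is turned on whenever the position is not in a private area.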
  • the method includes: acquiring laser point cloud data; constructing an environment map based on the laser SLAM algorithm; performing room segmentation on the environment map to generate an environment map containing multiple room areas; sending the environment map containing multiple room areas to the client; receiving the labeled environment map sent by the client, where the label is used to identify a private area; and determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  • the present invention can effectively protect the privacy of the user.
  • FIG. 7 is a schematic structural diagram of a camera control device according to an embodiment of the present invention.
  • the camera control device 70 includes:
  • the point cloud data acquisition unit 71 is used to acquire laser point cloud data
  • the environment map construction unit 72 is used to construct an environment map based on the laser SLAM algorithm
  • the room area segmentation unit 73 is configured to perform room segmentation on the environment map to generate an environment map containing multiple room areas;
  • the environment map sending unit 74 is configured to send the environment map including multiple room areas to the client;
  • the environmental map receiving unit 75 is configured to receive a labeled environmental map sent by the client, where the label is used to identify a private area;
  • the camera unit switch unit 76 is used to determine whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  • the room area segmentation unit 73 is specifically configured to: remove unstructured obstacles from the environment map and gray-scale the environment map to generate a preprocessed grayscale map; filter the preprocessed grayscale map and perform edge detection on the filtered grayscale map; find the map contours and number the contours of the different regions; and generate closed contours according to the similarity of adjacent pixels, segmenting the environment map according to the closed contours.
  • the camera unit switch unit 76 is specifically used to: output the current position of the mobile robot according to the laser SLAM algorithm; determine whether the current position is in a private area of the labeled environment map; if so, turn off the camera unit; and if not, turn on the camera unit.
  • a camera control device including: a point cloud data acquisition unit, used to acquire laser point cloud data; an environment map construction unit, used to construct an environment map based on the laser SLAM algorithm; a room area segmentation unit, used to perform room segmentation on the environment map to generate an environment map containing multiple room areas; an environment map sending unit, used to send the environment map containing multiple room areas to the client; an environment map receiving unit, used to receive the labeled environment map sent by the client, where the label is used to identify a private area; and a camera unit switch unit, used to determine whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  • FIG. 8 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention.
  • the mobile robot 80 includes one or more processors 81 and a memory 82.
  • processors 81 are taken as an example in FIG. 8.
  • the processor 81 and the memory 82 may be connected through a bus or in other ways. In FIG. 8, the connection through a bus is taken as an example.
  • the memory 82 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the units corresponding to the camera control method in the embodiments of the present application (for example, the units described in FIG. 7).
  • by running the non-volatile software programs, instructions, and modules stored in the memory 82, the processor 81 executes the various functional applications and data processing of the camera control method, that is, implements the camera control method of the foregoing method embodiment and the functions of the modules and units of the foregoing device embodiment.
  • the memory 82 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 82 may optionally include memories remotely provided with respect to the processor 81, and these remote memories may be connected to the processor 81 via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the modules are stored in the memory 82 and, when executed by the one or more processors 81, perform the camera control method in any of the above method embodiments, for example executing the steps shown in FIG. 4 described above; they can also implement the functions of the modules or units described in FIG. 7.
  • the electronic devices of the embodiments of the present application exist in various forms; when performing the steps shown in FIG. 4 described above and implementing the functions of the units described in FIG. 7, they include, but are not limited to, cleaning robots, service robots, remote monitoring robots, sweeping robots, and other robots.


Abstract

A camera control system and method, and a mobile robot (100). The camera control system includes the mobile robot (100) and a client (40), where the mobile robot (100) includes: a camera unit (10) for acquiring image data and/or video data; a lidar (20) for acquiring laser point cloud data; and a computing unit (30), communicatively connected to the camera unit (10) and the lidar (20), for constructing an environment map from the laser point cloud data. The client (40), communicatively connected to the computing unit (30), is used to generate a labeled environment map, so that the computing unit (30) determines whether to turn the camera unit (10) off or on according to the labeled environment map combined with the current position of the mobile robot (100). By labeling the environment map and thereby controlling the switching of the camera unit (10), user privacy can be effectively protected.

Description

Camera control system and method, and mobile robot

This application claims priority to prior application No. 201911043635.9, titled "Camera control system and method, and mobile robot", filed with the China National Intellectual Property Administration on October 30, 2019, the contents of which are incorporated herein by reference.

Technical Field

Embodiments of the present invention relate to the technical field of mobile robots, and in particular to a camera control system and method and a mobile robot.

Background

With the development of technology and the improvement of living standards, mobile robots such as cleaning robots, service robots, remote monitoring robots, and sweeping robots have gradually entered people's lives. Most of these mobile robots are fitted with cameras for monitoring the indoor environment. However, because the camera keeps working once started, continuous recording creates privacy problems: for example, when the mobile robot enters a private area such as a bedroom or bathroom, there is a risk of leaking the user's privacy.

For this reason, the privacy and security problem of mobile robots urgently needs to be solved.

Summary

To solve the above technical problem, embodiments of the present invention provide a camera control system and method and a mobile robot, which solve the current privacy and security problem of mobile robots and effectively protect user privacy.

To solve the above technical problem, embodiments of the present invention provide the following technical solutions:

In a first aspect, an embodiment of the present invention provides a camera control system, including a mobile robot and a client, where the mobile robot includes: a camera unit, arranged on the body of the mobile robot, for acquiring image data and/or video data; a lidar, arranged on the body of the mobile robot, for acquiring laser point cloud data; and a computing unit, communicatively connected to the camera unit and the lidar, for constructing an environment map from the laser point cloud data. The client, communicatively connected to the computing unit, is used to receive the environment map sent by the computing unit, perform region selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.

In a second aspect, an embodiment of the present invention provides a camera control method applied to the above camera control system. The method includes: acquiring laser point cloud data; constructing an environment map based on a laser SLAM algorithm; performing room segmentation on the environment map to generate an environment map containing multiple room areas; sending the environment map containing multiple room areas to the client; receiving a labeled environment map sent by the client, where the label is used to identify a private area; and determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.

In a third aspect, an embodiment of the present invention provides a mobile robot, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the above camera control method.

In a fourth aspect, an embodiment of the present invention provides a non-volatile computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause a server to perform the above camera control method.

The beneficial effects of the embodiments of the present invention are as follows. In contrast to the prior art, an embodiment of the present invention provides a camera control system including a mobile robot and a client, where the mobile robot includes: a camera unit, arranged on the body of the mobile robot, for acquiring image data and/or video data; a lidar, arranged on the body of the mobile robot, for acquiring laser point cloud data; and a computing unit, communicatively connected to the camera unit and the lidar, for constructing an environment map from the laser point cloud data. The client, communicatively connected to the computing unit, receives the environment map sent by the computing unit, performs region selection on it, generates a labeled environment map, and sends the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot. By labeling the environment map and thereby controlling the switching of the camera unit, the present invention effectively protects user privacy.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other variants from them without creative effort.

FIG. 1 is a schematic diagram of an application environment according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a camera control system according to an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of another camera control system according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of a camera control method according to an embodiment of the present invention;

FIG. 5 is a detailed flowchart of step S30 in FIG. 4;

FIG. 6 is a detailed flowchart of step S60 in FIG. 4;

FIG. 7 is a schematic structural diagram of a camera control device according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention are described in detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

At present, most mobile robots, including cleaning robots, service robots, remote monitoring robots, and sweeping robots, are fitted with cameras to monitor the indoor environment. Because the camera keeps working once started, when the mobile robot enters a private area, the video or images captured there can easily leak the user's privacy, creating a privacy and security problem.

For this reason, embodiments of the present invention provide a camera control system and method and a mobile robot that effectively protect user privacy.
Referring to FIG. 1, FIG. 1 is a schematic diagram of an application environment according to an embodiment of the present invention.

As shown in FIG. 1, the mobile robot, the client, and the server are communicatively connected through a network, where the network includes a wired network and/or a wireless network.

It can be understood that the network includes wireless networks such as 2G, 3G, 4G, 5G, wireless local area networks, and Bluetooth, and may also include wired networks such as serial cables and Ethernet cables.

It can be understood that the mobile robot includes, but is not limited to, cleaning robots, service robots, remote monitoring robots, sweeping robots, and other robots.

It can be understood that the client includes, but is not limited to:

(1) Mobile communication devices: these devices feature mobile communication capability, with voice and data communication as the primary goal. They include smartphones (for example, the iPhone), multimedia phones, feature phones, and low-end phones.

(2) Mobile personal computer devices: these belong to the category of personal computers, with computing and processing capability, and generally also mobile Internet access. They include PDA, MID, and UMPC devices, for example the iPad.

(3) Portable entertainment devices: these can display and play video content, and generally also have mobile Internet access. They include video players, handheld game consoles, smart toys, and portable in-car navigation devices.

(4) Other electronic devices with video playback and Internet access capabilities.

It can be understood that the server includes, but is not limited to:

(1) Tower servers: an ordinary tower server chassis is similar to a common PC chassis, while a large tower chassis is much bulkier; overall, there is no fixed standard for its external dimensions.

(2) Rack servers: rack servers arose to meet the dense deployment needs of enterprises, forming a server type with a standard 19-inch rack width and heights from 1U up to several U. Placing servers in a rack is not only convenient for daily maintenance and management but may also avoid unexpected failures. First, a rack-mounted server does not occupy excessive space: the servers are arranged neatly in the rack without wasting room. Second, cables can be routed tidily: power cords, LAN cables, and the like can all be cabled within the cabinet, reducing the tangle of cables on the floor and preventing accidents such as cables being kicked loose. The specified dimensions are the server width (48.26 cm = 19 inches) and height (multiples of 4.445 cm). Because the width is 19 inches, racks meeting this specification are sometimes called "19-inch racks".

(3) Blade servers: a blade server is a HAHD (High Availability High Density) low-cost server platform designed for special application industries and high-density computing environments, where each "blade" is in fact a system motherboard, similar to an independent server. In this mode, each motherboard runs its own system and serves a designated user group, with no relation to the others. System software can, however, assemble these motherboards into a server cluster. In cluster mode, all the motherboards can be connected to provide a high-speed network environment and share resources, serving the same user group.

(4) Cloud servers: a cloud server (Elastic Compute Service, ECS) is a computing service that is simple and efficient, safe and reliable, and elastically scalable in processing capacity. Its management is simpler and more efficient than that of physical servers; users can quickly create or release any number of cloud servers without purchasing hardware in advance. The distributed storage of cloud servers consolidates large numbers of servers into a supercomputer providing massive data storage and processing services. Distributed file systems and distributed databases allow access to shared storage resources, enabling I/O sharing of application data files. Virtual machines can break the limits of a single physical machine; dynamic resource adjustment and allocation eliminate single points of failure in servers and storage devices, achieving high availability.
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of a camera control system according to an embodiment of the present invention.

As shown in FIG. 2, the camera control system 100 includes a camera unit 10, a lidar 20, a computing unit 30, and a client 40, where the computing unit 30 is connected to the camera unit 10, the lidar 20, and the client 40 respectively.

The camera unit 10 is arranged on the body of the mobile robot 100 and is used to acquire image data and/or video data.

Specifically, the camera unit 10 is communicatively connected to the computing unit 30 and arranged on the body of the mobile robot 100, and is used to acquire image data and/or video data within its coverage, for example image data and/or video data in an enclosed space, and to send the acquired data to the computing unit 30. In this embodiment of the present invention, the camera unit 10 includes camera devices such as infrared cameras, night-vision cameras, network cameras, digital cameras, high-definition cameras, 4K cameras, and 8K high-definition cameras.

The lidar 20 is arranged on the body of the mobile robot 100, for example on a mobile chassis on the body of the mobile robot 100, and is used to acquire laser point cloud data.

Specifically, the lidar 20 is communicatively connected to the computing unit 30 and arranged on the body of the mobile robot 100. The lidar 20 acquires laser point cloud data within the monitored range; the mobile chassis on the body of the mobile robot 100 is provided with a communication component, through which the laser point cloud data acquired by the lidar is sent to the computing unit 30. In this embodiment of the present invention, the mobile chassis includes robot chassis types such as all-purpose universal chassis and arched mobile chassis.

The computing unit 30 is communicatively connected to the camera unit 10, the lidar 20, and the client 40, and is used to obtain the image data and/or video data acquired by the camera unit 10 and the laser point cloud data acquired by the lidar 20, and to construct an environment map from the laser point cloud data. The computing unit 30 runs a laser SLAM algorithm on the laser point cloud data of the monitored area to construct the environment map. In this embodiment of the present invention, the laser SLAM algorithm includes methods such as particle filtering and graph optimization, and the computing unit 30 includes a circuit board with computing capability, for example a PCB, or a processor such as a CPU or GPU.

It can be understood that the computing unit 30 is also used to control the movement of the mobile robot, for example by controlling the mobile chassis on the robot body.

The client 40 is communicatively connected to the computing unit 30 and is used to receive the environment map sent by the computing unit 30, perform region selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit 30, so that the computing unit 30 determines whether to turn the camera unit 10 off or on according to the labeled environment map combined with the current position of the mobile robot 100.

In this embodiment of the present invention, the client 40 includes, but is not limited to, electronic devices such as mobile communication devices, mobile personal computer devices, and portable entertainment devices. The client 40 is installed with an application (APP), through which the user can receive the environment map sent by the computing unit 30 and can also send commands to the computing unit 30 so that the computing unit 30 controls the mobile robot 100 to execute them, for example a standby command to put the mobile robot 100 into a standby state, or a camera-off command to make the computing unit 30 turn off the camera unit 10 of the mobile robot 100.

The client APP receives the environment map sent by the computing unit 30 so that the user can perform region selection on it through the APP. A selected region is treated as a private area and a label is automatically added to it, generating a labeled environment map, which is sent to the computing unit 30 so that the computing unit 30 determines whether to turn the camera unit 10 off or on according to the labeled environment map combined with the current position of the mobile robot 100: for example, if the current position of the mobile robot 100 is in a labeled private area, the camera unit 10 is turned off; if it is in an unlabeled area, the camera unit 10 is turned on.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of another camera control system according to an embodiment of the present invention.

As shown in FIG. 3, the mobile robot 100 includes a camera unit 10, a lidar 20, a computing unit 30, a client 40, a server 50, a communication module 60, and a voice recognition module 70, where the computing unit 30 is connected to the camera unit 10, the lidar 20, the communication module 60, and the voice recognition module 70 respectively, the communication module 60 is connected to the server 50, and the server 50 is connected to the client 40.

Specifically, the camera unit 10 is communicatively connected to the computing unit 30 and arranged on the body of the mobile robot 100, and is used to acquire image data and/or video data within its coverage, for example image data and/or video data in an enclosed space, and to send the acquired data to the computing unit 30. In this embodiment of the present invention, the camera unit 10 includes camera devices such as infrared cameras, night-vision cameras, network cameras, digital cameras, high-definition cameras, 4K cameras, and 8K high-definition cameras.

Specifically, the lidar 20 is communicatively connected to the computing unit 30 and arranged on the body of the mobile robot 100, for example on a mobile chassis on the robot body. The lidar acquires laser point cloud data within the monitored range; the mobile chassis is provided with a communication component, through which the acquired laser point cloud data is sent to the computing unit 30. In this embodiment of the present invention, the mobile chassis includes robot chassis types such as all-purpose universal chassis and arched mobile chassis, and the lidar 20 includes radar types such as pulsed lidar and continuous-wave lidar.

Specifically, the computing unit 30 is communicatively connected to the camera unit 10, the lidar 20, and the communication module 60, and is used to obtain the image data and/or video data acquired by the camera unit 10 and the laser point cloud data acquired by the lidar, and to construct an environment map from the laser point cloud data. The computing unit 30 runs a laser SLAM algorithm on the laser point cloud data of the monitored area to construct the environment map, and sends the environment map to the communication module 60 so that the communication module 60 sends it to the server 50. In this embodiment of the present invention, the laser SLAM algorithm includes methods such as particle filtering and graph optimization, and the computing unit 30 includes a circuit board with computing capability, for example a PCB, or a processor such as a CPU or GPU, or one or more combinations of a microcontroller unit (MCU), a field-programmable gate array (FPGA), and a system on chip (SoC).

In this embodiment of the present invention, the computing unit 30 includes a storage module, which includes, but is not limited to, one or more of FLASH memory, NAND flash, vertical NAND flash (VNAND), NOR flash, resistive random-access memory (RRAM), magnetoresistive random-access memory (MRAM), ferroelectric random-access memory (FRAM), and spin-transfer-torque random-access memory (STT-RAM).

Specifically, the client 40 is communicatively connected to the server 50 and is used to receive the environment map sent by the server 50, perform region selection on it, generate a labeled environment map, and send the labeled environment map to the server 50, so that the server 50 sends it to the communication module 60, which forwards it to the computing unit 30, so that the computing unit 30 determines whether to turn the camera unit 10 off or on according to the labeled environment map combined with the current position of the mobile robot 100.

Specifically, the server 50 is communicatively connected to the communication module 60 and the client 40, and is used to receive the environment map sent by the communication module 60 and send it to the client 40; the server 50 is also used to receive the labeled environment map sent by the client 40 and send it to the communication module 60. It can be understood that the server 50 includes a storage module that can store the environment map and the labeled environment map. In this embodiment of the present invention, the server 50 includes, but is not limited to, tower servers, rack servers, blade servers, and cloud servers.

Specifically, the communication module 60 is communicatively connected to the computing unit 30 and the server 50, and is used to forward and receive information, for example forwarding the environment map sent by the computing unit 30 and receiving the labeled environment map sent by the server 50. In this embodiment of the present invention, the communication module 60 can communicate with the Internet and includes, but is not limited to, communication units such as WIFI modules, ZigBee modules, NB-IoT modules, 4G modules, 5G modules, and Bluetooth modules.

Specifically, the voice recognition module 70 is communicatively connected to the computing unit 30 and is used to obtain voice information, generate a control code from the voice information, and send the control code to the computing unit 30, so that the computing unit 30 generates a control instruction from the control code to turn the camera unit 10 on or off. The voice recognition module 70 is communicatively connected to the client 40 to receive voice information sent by the client 40 and generate a control code from it. For example, after obtaining the voice information sent by the client, the voice recognition module outputs it as a binary code according to a preset protocol; the preset protocol may map the syllables of the voice information to the corresponding Latin letters and parse those letters into a binary code, which is transmitted to the computing unit. For example, when the voice information is "turn on the camera", the voice recognition module recognizes "turn on" (开启) as "kaiqi" and converts "kaiqi" into a binary code, each letter corresponding to a unique binary code, thereby obtaining the binary code for "turn on". After the computing unit obtains the binary code sent by the voice recognition module, it generates the corresponding control instruction according to a preset communication protocol, such as TCP/IP or UDP, to control the switching of the camera unit. The preset communication protocol may also be a custom protocol, for example 0x11 for camera on and 0x00 for camera off.

It can be understood that the voice recognition module may include a voice library containing phrases or sentences preset by the user, such as "turn on", "turn off", "open", and "shut down", each corresponding to a binary code. When the voice recognition module receives voice information from the client, it recognizes the phrases or sentences in it and matches them against the phrases or sentences in the voice library; when a match is found, the corresponding binary code is obtained automatically, reducing the time needed to convert voice information into binary codes and increasing the speed of voice recognition.
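The voice-library matching described above can be sketched as a simple lookup table from recognized phrase to control code. The 0x11 (camera on) and 0x00 (camera off) values follow the custom-protocol example in the text; the pinyin keys, the exact lexicon entries, and the function name are illustrative assumptions.

```python
# Preset voice lexicon: recognized phrase (pinyin) -> control code.
# 0x11 = camera on, 0x00 = camera off, per the custom-protocol example.
LEXICON = {
    "kaiqi":  0x11,   # "开启" (turn on)
    "dakai":  0x11,   # "打开" (open)
    "guanbi": 0x00,   # "关闭" (turn off)
    "guanji": 0x00,   # "关机" (shut down)
}

def voice_to_code(phrase):
    """Match a recognized phrase against the lexicon; None if unknown."""
    return LEXICON.get(phrase)

if __name__ == "__main__":
    code = voice_to_code("kaiqi")
    print(hex(code))
```

Matching against a small preset lexicon is what lets the module skip the general syllable-to-letters-to-binary conversion path, which is exactly the speed-up the text attributes to the voice library.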
In this embodiment of the present invention, a camera control system is provided, including: a camera unit, arranged on the body of the mobile robot, for acquiring image data and/or video data; a mobile chassis, arranged on the body of the mobile robot, where the mobile chassis is fitted with a lidar used to acquire laser point cloud data; a computing unit, communicatively connected to the camera unit and the mobile chassis, for constructing an environment map from the laser point cloud data; and a client, communicatively connected to the computing unit, for receiving the environment map sent by the computing unit, performing region selection on it, generating a labeled environment map, and sending the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot. By labeling the environment map and thereby controlling the switching of the camera unit, the present invention effectively protects user privacy.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a camera control method according to an embodiment of the present invention.

As shown in FIG. 4, the camera control method includes:

Step S10: acquire laser point cloud data.

Specifically, the camera control method is applied to a mobile robot that includes a mobile chassis provided with a lidar, which acquires laser point cloud data within the monitored area of the mobile robot.

Step S20: construct an environment map based on a laser SLAM algorithm.

The laser SLAM (Simultaneous Localization and Mapping) algorithm includes Kalman filtering, particle filtering, and graph optimization methods; the environment map is constructed through the laser SLAM algorithm.

Step S30: perform room segmentation on the environment map to generate an environment map containing multiple room areas.

Room segmentation of the environment map is performed with a watershed algorithm, generating an environment map containing multiple room areas.

Specifically, referring to FIG. 5, FIG. 5 is a detailed flowchart of step S30 in FIG. 4.

As shown in FIG. 5, performing room segmentation on the environment map to generate an environment map containing multiple room areas includes:

Step S31: remove unstructured obstacles from the environment map and gray-scale the environment map to generate a preprocessed grayscale map.

Specifically, the color image of the environment map is obtained, unstructured obstacles such as tables and chairs are removed, and the color image of the environment map is converted into a grayscale image, generating the preprocessed grayscale map.

Step S32: filter the preprocessed grayscale map and perform edge detection on the filtered grayscale map.

Specifically, the preprocessed grayscale map is filtered, for example with Kalman filtering or particle filtering, and edge detection is performed on the filtered grayscale map.

Step S33: find the map contours and number the contours of the different regions.

Specifically, the map contours are found. The filtered grayscale map includes the contours of multiple different regions, and these contours are numbered, each with its own unique number; this is equivalent to setting water-injection points on the filtered grayscale map, the number of water-injection points being equal to the number of contours.

Step S34: generate closed contours according to the similarity of adjacent pixels, and segment the environment map according to the closed contours.

Specifically, according to the similarity of adjacent pixels, pixels that are close in spatial position and similar in gray value are connected to each other to form a new closed contour, and the map is then segmented according to the processed region contours.

Step S40: send the environment map containing multiple room areas to the client.

Specifically, the computing unit sends the environment map containing multiple room areas to the server through the communication module, so that the server forwards it to the client; alternatively, the computing unit sends the environment map containing multiple room areas directly to the client. That is, the computing unit sends the segmented environment map to the server through the communication module so that the server sends it to the client, or the computing unit sends the segmented environment map directly to the client.

Step S50: receive the labeled environment map sent by the client, where the label is used to identify a private area.

Specifically, after receiving the environment map containing multiple room areas (that is, the segmented environment map) from the server or the computing unit, the client performs region selection on it, selecting the private areas in the environment map. After a private area is selected, a label is set for it; alternatively, all areas are labeled, for example with 0 representing a non-private area and 1 representing a private area, so that all private areas selected by the user are set to label 1 and the other areas to label 0. The client then transmits the labeled environment map over the Internet to the server, so that the server sends it to the communication module, which forwards it to the computing unit; alternatively, the client sends the labeled environment map directly to the computing unit.

Step S60: determine whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.

The labeled environment map carries labels of private areas and/or non-private areas, and the computing unit determines whether a private area exists according to the private-area labels.

Specifically, referring to FIG. 6, FIG. 6 is a detailed flowchart of step S60 in FIG. 4.

As shown in FIG. 6, determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot includes:

Step S61: output the current position of the mobile robot according to the laser SLAM algorithm.

Step S62: determine whether the current position of the mobile robot is in a private area of the labeled environment map; if so, proceed to step S621: turn off the camera unit; if not, proceed to step S622: turn on the camera unit.

Step S621: turn off the camera unit.

Step S622: turn on the camera unit.

In this embodiment of the present invention, a camera control method applied to the camera control system of the above embodiments is provided, the method including: acquiring laser point cloud data; constructing an environment map based on a laser SLAM algorithm; performing room segmentation on the environment map to generate an environment map containing multiple room areas; sending the environment map containing multiple room areas to the client; receiving the labeled environment map sent by the client, where the label is used to identify a private area; and determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot. By labeling the environment map and thereby controlling the switching of the camera unit, the present invention effectively protects user privacy.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of a camera control device according to an embodiment of the present invention.

As shown in FIG. 7, the camera control device 70 includes:

a point cloud data acquisition unit 71, used to acquire laser point cloud data;

an environment map construction unit 72, used to construct an environment map based on a laser SLAM algorithm;

a room area segmentation unit 73, used to perform room segmentation on the environment map to generate an environment map containing multiple room areas;

an environment map sending unit 74, used to send the environment map containing multiple room areas to the client;

an environment map receiving unit 75, used to receive the labeled environment map sent by the client, where the label is used to identify a private area; and

a camera unit switch unit 76, used to determine whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.

In this embodiment of the present invention, the room area segmentation unit 73 is specifically configured to:

remove unstructured obstacles from the environment map and gray-scale the environment map to generate a preprocessed grayscale map;

filter the preprocessed grayscale map and perform edge detection on the filtered grayscale map;

find the map contours and number the contours of the different regions; and

generate closed contours according to the similarity of adjacent pixels, and segment the environment map according to the closed contours.

In this embodiment of the present invention, the camera unit switch unit 76 is specifically used to:

output the current position of the mobile robot according to the laser SLAM algorithm;

determine whether the current position of the mobile robot is in a private area of the labeled environment map;

if the current position of the mobile robot is in a private area of the labeled environment map, turn off the camera unit; and

if the current position of the mobile robot is not in a private area of the labeled environment map, turn on the camera unit.

In this embodiment of the present invention, a camera control device is provided, including: a point cloud data acquisition unit, used to acquire laser point cloud data; an environment map construction unit, used to construct an environment map based on a laser SLAM algorithm; a room area segmentation unit, used to perform room segmentation on the environment map to generate an environment map containing multiple room areas; an environment map sending unit, used to send the environment map containing multiple room areas to the client; an environment map receiving unit, used to receive the labeled environment map sent by the client, where the label is used to identify a private area; and a camera unit switch unit, used to determine whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot. By labeling the environment map and thereby controlling the switching of the camera unit, the present invention effectively protects user privacy.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of a mobile robot according to an embodiment of the present invention.

As shown in FIG. 8, the mobile robot 80 includes one or more processors 81 and a memory 82; one processor 81 is taken as an example in FIG. 8.

The processor 81 and the memory 82 may be connected through a bus or in other ways; connection through a bus is taken as an example in FIG. 8.

The memory 82, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the units corresponding to the camera control method in the embodiments of the present application (for example, the units described in FIG. 7). By running the non-volatile software programs, instructions, and modules stored in the memory 82, the processor 81 executes the various functional applications and data processing of the camera control method, that is, implements the camera control method of the foregoing method embodiment and the functions of the modules and units of the foregoing device embodiment.

The memory 82 may include a high-speed random-access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 82 may optionally include memories arranged remotely from the processor 81, and these remote memories may be connected to the processor 81 via a network. Examples of such networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.

The modules are stored in the memory 82 and, when executed by the one or more processors 81, perform the camera control method in any of the above method embodiments, for example executing the steps shown in FIG. 4 described above; they can also implement the functions of the modules or units described in FIG. 7.

The electronic devices of the embodiments of the present application exist in various forms; when performing the steps shown in FIG. 4 described above and implementing the functions of the units described in FIG. 7, they include, but are not limited to, cleaning robots, service robots, remote monitoring robots, sweeping robots, and other robots.

In the description of this specification, references to the terms "first embodiment", "second embodiment", "embodiment of the present invention", "one implementation", "an implementation", "one embodiment", "example", "specific example", or "some examples" mean that specific features, structures, materials, or characteristics described in conjunction with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples.

The above implementations do not limit the protection scope of the technical solution. Any modification, equivalent replacement, or improvement made within the spirit and principles of the above implementations shall be included in the protection scope of the technical solution.

Claims (20)

  1. A camera control system, comprising a mobile robot and a client, wherein the mobile robot comprises:
    a camera unit, arranged on the body of the mobile robot, for acquiring image data and/or video data;
    a lidar, arranged on the body of the mobile robot, for acquiring laser point cloud data; and
    a computing unit, communicatively connected to the camera unit and the lidar, for constructing an environment map from the laser point cloud data;
    wherein the client, communicatively connected to the computing unit, is used to receive the environment map sent by the computing unit, perform region selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  2. The system of claim 1, further comprising:
    a server, connected to the computing unit and the client, for obtaining the environment map sent by the computing unit and sending the environment map to the client.
  3. The system of claim 2, further comprising:
    a communication module, connected to the computing unit, through which the computing unit is communicatively connected to the client and/or the server.
  4. The system of claim 1, further comprising:
    a voice recognition module, communicatively connected to the computing unit, for obtaining voice information, generating a control code from the voice information, and sending the control code to the computing unit, so that the computing unit generates a control instruction from the control code to turn the camera unit on or off.
  5. A camera control method, applied to a camera control system, the camera control system comprising a mobile robot and a client, wherein the mobile robot comprises:
    a camera unit, arranged on the body of the mobile robot, for acquiring image data and/or video data;
    a lidar, arranged on the body of the mobile robot, for acquiring laser point cloud data; and
    a computing unit, communicatively connected to the camera unit and the lidar, for constructing an environment map from the laser point cloud data;
    wherein the client, communicatively connected to the computing unit, is used to receive the environment map sent by the computing unit, perform region selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot;
    the method comprising:
    acquiring laser point cloud data;
    constructing an environment map based on a laser SLAM algorithm;
    performing room segmentation on the environment map to generate an environment map containing multiple room areas;
    sending the environment map containing multiple room areas to the client;
    receiving a labeled environment map sent by the client, the label being used to identify a private area; and
    determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  6. The method of claim 5, wherein performing room segmentation on the environment map to generate an environment map containing multiple room areas comprises:
    removing unstructured obstacles from the environment map and gray-scaling the environment map to generate a preprocessed grayscale map;
    filtering the preprocessed grayscale map and performing edge detection on the filtered grayscale map;
    finding the map contours and numbering the contours of the different regions; and
    generating closed contours according to the similarity of adjacent pixels, and segmenting the environment map according to the closed contours.
  7. The method of claim 5, wherein determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot comprises:
    outputting the current position of the mobile robot according to the laser SLAM algorithm;
    determining whether the current position of the mobile robot is in a private area of the labeled environment map;
    if the current position of the mobile robot is in a private area of the labeled environment map, turning off the camera unit; and
    if the current position of the mobile robot is not in a private area of the labeled environment map, turning on the camera unit.
  8. The method of claim 5, further comprising:
    sending the environment map to a server, so that the server sends the environment map to the client;
    receiving the labeled environment map sent by the server; and
    determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  9. The method of claim 5, further comprising:
    obtaining voice information sent by the client; and
    generating a control instruction from the voice information and, based on the control instruction, turning the camera unit on or off.
  10. The method of claim 5, wherein the camera control system further comprises:
    a server, connected to the computing unit and the client, for obtaining the environment map sent by the computing unit and sending the environment map to the client.
  11. The method of claim 10, wherein the camera control system further comprises:
    a communication module, connected to the computing unit, through which the computing unit is communicatively connected to the client and/or the server.
  12. The method of claim 5, wherein the camera control system further comprises:
    a voice recognition module, communicatively connected to the computing unit, for obtaining voice information, generating a control code from the voice information, and sending the control code to the computing unit, so that the computing unit generates a control instruction from the control code to turn the camera unit on or off.
  13. A mobile robot, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform a camera control method;
    the camera control method is applied to a camera control system, the camera control system comprising a mobile robot and a client, wherein the mobile robot comprises:
    a camera unit, arranged on the body of the mobile robot, for acquiring image data and/or video data;
    a lidar, arranged on the body of the mobile robot, for acquiring laser point cloud data; and
    a computing unit, communicatively connected to the camera unit and the lidar, for constructing an environment map from the laser point cloud data;
    wherein the client, communicatively connected to the computing unit, is used to receive the environment map sent by the computing unit, perform region selection on the environment map, generate a labeled environment map, and send the labeled environment map to the computing unit, so that the computing unit determines whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot;
    the method comprising:
    acquiring laser point cloud data;
    constructing an environment map based on a laser SLAM algorithm;
    performing room segmentation on the environment map to generate an environment map containing multiple room areas;
    sending the environment map containing multiple room areas to the client;
    receiving a labeled environment map sent by the client, the label being used to identify a private area; and
    determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  14. The mobile robot of claim 13, wherein performing room segmentation on the environment map to generate an environment map containing multiple room areas comprises:
    removing unstructured obstacles from the environment map and gray-scaling the environment map to generate a preprocessed grayscale map;
    filtering the preprocessed grayscale map and performing edge detection on the filtered grayscale map;
    finding the map contours and numbering the contours of the different regions; and
    generating closed contours according to the similarity of adjacent pixels, and segmenting the environment map according to the closed contours.
  15. The mobile robot of claim 13, wherein determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot comprises:
    outputting the current position of the mobile robot according to the laser SLAM algorithm;
    determining whether the current position of the mobile robot is in a private area of the labeled environment map;
    if the current position of the mobile robot is in a private area of the labeled environment map, turning off the camera unit; and
    if the current position of the mobile robot is not in a private area of the labeled environment map, turning on the camera unit.
  16. The mobile robot of claim 13, wherein the method further comprises:
    sending the environment map to a server, so that the server sends the environment map to the client;
    receiving the labeled environment map sent by the server; and
    determining whether to turn the camera unit off or on according to the labeled environment map combined with the current position of the mobile robot.
  17. The mobile robot of claim 13, wherein the method further comprises:
    obtaining voice information sent by the client; and
    generating a control instruction from the voice information and, based on the control instruction, turning the camera unit on or off.
  18. The mobile robot of claim 13, wherein the camera control system further comprises:
    a server, connected to the computing unit and the client, for obtaining the environment map sent by the computing unit and sending the environment map to the client.
  19. The mobile robot of claim 18, wherein the camera control system further comprises:
    a communication module, connected to the computing unit, through which the computing unit is communicatively connected to the client and/or the server.
  20. The mobile robot of claim 13, wherein the camera control system further comprises:
    a voice recognition module, communicatively connected to the computing unit, for obtaining voice information, generating a control code from the voice information, and sending the control code to the computing unit, so that the computing unit generates a control instruction from the control code to turn the camera unit on or off.
PCT/CN2020/105634 2019-10-30 2020-07-29 Camera control system and method, and mobile robot WO2021082565A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911043635.9A CN110716568A (zh) 2019-10-30 2019-10-30 Camera control system and method, and mobile robot
CN201911043635.9 2019-10-30

Publications (1)

Publication Number Publication Date
WO2021082565A1 true WO2021082565A1 (zh) 2021-05-06

Family

ID=69214542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105634 WO2021082565A1 (zh) 2019-10-30 2020-07-29 Camera control system and method, and mobile robot

Country Status (2)

Country Link
CN (1) CN110716568A (zh)
WO (1) WO2021082565A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11805175B2 (en) 2021-12-01 2023-10-31 International Business Machines Corporation Management of devices in a smart environment

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716568A (zh) * 2019-10-30 2020-01-21 深圳市银星智能科技股份有限公司 Camera control system and method, and mobile robot
CN111443627B (zh) * 2020-02-24 2021-11-26 国网浙江省电力有限公司湖州供电公司 Homestay power supply system and control method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170225336A1 (en) * 2016-02-09 2017-08-10 Cobalt Robotics Inc. Building-Integrated Mobile Robot
CN107566743A (zh) * 2017-10-30 2018-01-09 珠海市微半导体有限公司 Video surveillance method for a mobile robot
CN108898605A (zh) * 2018-07-25 2018-11-27 电子科技大学 Graph-based grid map segmentation method
CN109358340A (zh) * 2018-08-27 2019-02-19 广州大学 Lidar-based AGV indoor map construction method and system
CN208540016U (zh) * 2018-07-16 2019-02-22 深圳市优必选科技有限公司 Camera structure and robot
CN109464074A (zh) * 2018-11-29 2019-03-15 深圳市银星智能科技股份有限公司 Area division method, partitioned cleaning method, and robot thereof
CN109993780A (zh) * 2019-03-07 2019-07-09 深兰科技(上海)有限公司 Three-dimensional high-precision map generation method and device
CN110333495A (zh) * 2019-07-03 2019-10-15 深圳市杉川机器人有限公司 Method, device, system, and storage medium for mapping long corridors with laser SLAM
CN110716568A (zh) * 2019-10-30 2020-01-21 深圳市银星智能科技股份有限公司 Camera control system and method, and mobile robot



Also Published As

Publication number Publication date
CN110716568A (zh) 2020-01-21

Similar Documents

Publication Publication Date Title
WO2021082565A1 (zh) Camera control system and method, and mobile robot
US20140267413A1 (en) Adaptive facial expression calibration
CN106603969A (zh) 一种视频监控方法、装置和***以及探测设备
US10110691B2 (en) Systems and methods for enabling virtual keyboard-video-mouse for external graphics controllers
US8326960B2 (en) Wake on local area network signalling in a multi-root I/O virtualization
US20210272467A1 (en) Interactive environments using visual computing and immersive reality
US20210385463A1 (en) Resource-efficient video coding and motion estimation
CN108632354A (zh) 物理机纳管方法、装置及云桌面管理平台
US20120137148A1 (en) Rack server device
CN107211550A (zh) 用于无线机架管理控制器通信的***和方法
US20080028053A1 (en) Method and system for a wake on LAN (WOL) computer system startup process
Sunehra et al. An intelligent surveillance with cloud storage for home security
CN102339474A (zh) 使用多个执行线程的层合成、呈现和动画
CN103999044A (zh) 用于多遍渲染的技术
EP2353090B1 (en) System and method for aggregating management of devices connected to a server
US20190018129A1 (en) Speed detection device and communicable coupled virtual display
CN112684965A (zh) 动态壁纸状态变更方法、装置、电子设备及存储介质
US20170261961A1 (en) Method and Apparatus for Protecting Heat Dissipation Fan of Projecting Device
JP2013171435A (ja) サービス提供システム、サービス提供方法、リソースマネージャ、プログラム
US20120185713A1 (en) Server, storage medium, and method for controlling sleep and wakeup function of the server
Gantala et al. Human tracking system using beagle board-xm
CN205407999U (zh) 一种can总线摄像头控制器
US10075398B2 (en) Systems and methods for enabling a host system to use a network interface of a management controller
CN107247593A (zh) 用户接口切换方法、装置、电子设备及存储介质
Devare Analysis and design of IoT based physical location monitoring system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20881215

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/10/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20881215

Country of ref document: EP

Kind code of ref document: A1