WO2022267795A1 - Method and Device for Processing an Area Map, Storage Medium, and Electronic Device - Google Patents

Method and Device for Processing an Area Map, Storage Medium, and Electronic Device

Info

Publication number
WO2022267795A1
WO2022267795A1 (application PCT/CN2022/094615)
Authority
WO
WIPO (PCT)
Prior art keywords
target
target object
area map
information
area
Prior art date
Application number
PCT/CN2022/094615
Other languages
English (en)
French (fr)
Inventor
王生乐
Original Assignee
追觅创新科技(苏州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司
Publication of WO2022267795A1

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 Installations of electric equipment
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • the present application relates to the communication field, and in particular, relates to a method and device for processing an area map, a storage medium, and an electronic device.
  • the sweeping robot usually interacts with the user through a mobile App (Application), and the house map created by the sweeping robot is displayed on the App.
  • the above-mentioned house map usually only shows whether an object exists at a given position, and the user cannot obtain accurate object information from the map.
  • the purpose of the present application is to provide a method and device for processing an area map, a storage medium and an electronic device, so as to at least solve the problem of low information acquisition efficiency in the related art of constructing an area map by a sweeping robot.
  • a method for processing an area map is provided, including: acquiring a target object image corresponding to a target object in a target area, wherein the target area is an area cleaned by a sweeping robot; recognizing the target object according to the target object image to obtain a target recognition result, wherein the target recognition result includes description information of the target object; and marking the target object in a target area map according to the target recognition result, wherein the marked target object is the object corresponding to the target object in the target area map, and the target area map is an area map established for the target area by the sweeping robot.
  • before acquiring the target object image corresponding to the target object in the target area, the method further includes: detecting, while the sweeping robot cleans the target area, that the target object exists in the target area.
  • identifying the target object according to the target object image and obtaining the target recognition result includes: performing type identification on the target object according to the target object image to obtain target type information, Wherein, the target type information is used to represent the object type of the target object, and the target recognition result includes the target type information.
  • performing type recognition on the target object according to the target object image to obtain the target type information includes: inputting the target object image into a target recognition model and obtaining the target type information output by the target recognition model, where the target recognition model is obtained by training an initial recognition model with object images of sample objects, and each sample object is marked with its corresponding object type.
  • recognizing the target object according to the target object image, and obtaining the target recognition result includes: performing contour recognition on the target object according to the target object image to obtain target contour information, wherein, the target contour information is used to represent the object contour of the target object, and the target recognition result includes the target contour information.
  • marking the target object in the target area map includes: marking, according to the target outline information, the outline of the target object in the target area map in the form of a virtual wall.
  • the method further includes: sending the target area map to a target application on the target terminal for display, wherein the target application is logged in with the target account bound to the sweeping robot.
  • a method for processing an area map is provided, including: receiving, through a target application, the target area map sent by the sweeping robot, wherein the target application is logged in with the target account bound to the sweeping robot, and the target area map is an area map established for the target area by the sweeping robot; and displaying the target area map on a target display interface of the target application, wherein a target object and target annotation information of the target object are displayed on the target area map, the target object is the object in the target area map corresponding to the target object in the target area, and the target annotation information is used to describe the target object.
  • displaying the target area map on the target display interface of the target application includes: displaying, in the target area map on the target display interface, the target object and the contour information of the target object, wherein the contour information of the target object is used to represent the contour of the target object, and the target annotation information includes the contour information of the target object.
  • displaying the target object and the outline information of the target object in the target area map displayed on the target display interface includes: displaying the target object in the target area map on the target display interface, and displaying the contour information of the target object in the form of a virtual wall.
  • displaying the target area map on the target display interface of the target application includes: displaying, in the target area map displayed on the target display interface, the target object and target type information, wherein the target type information is used to represent the object type of the target object, and the target annotation information includes the target type information.
  • an area map processing device is provided, including: an acquisition unit configured to acquire a target object image corresponding to a target object in a target area, wherein the target area is an area cleaned by a sweeping robot; a recognition unit configured to recognize the target object according to the target object image to obtain a target recognition result, wherein the target recognition result includes description information of the target object; and a labeling unit configured to mark the target object in a target area map according to the target recognition result, wherein the marked target object is the object corresponding to the target object in the target area map, and the target area map is an area map established for the target area by the sweeping robot.
  • the device further includes: a detection unit configured to detect, before the target object image corresponding to the target object in the target area is acquired, that the target object exists in the target area while the sweeping robot cleans the target area.
  • the recognition unit includes: a first recognition module configured to identify the type of the target object according to the image of the target object to obtain target type information, wherein the target type information is used to indicate the object type of the target object, and the target recognition result includes the target type information.
  • the first recognition module includes: an input submodule configured to input the target object image into a target recognition model to obtain the target type information output by the target recognition model, where the target recognition model is obtained by training an initial recognition model with object images of sample objects marked with corresponding object types.
  • the recognition unit includes: a second recognition module configured to perform contour recognition on the target object according to the target object image to obtain target contour information, wherein the target contour information is used to indicate the object contour of the target object, and the target recognition result includes the target contour information.
  • the labeling unit includes: a labeling module, configured to mark an outline of the target object in the form of a virtual wall in the target area map according to the target outline information.
  • the device further includes: a sending unit configured to send, after the target object is marked in the target area map according to the target recognition result, the target area map to the target application on the target terminal for display, wherein the target application is logged in with the target account bound to the sweeping robot.
  • an area map processing device is provided, including: a receiving unit configured to receive, through the target application, the target area map sent by the sweeping robot, wherein the target application is logged in with the target account bound to the sweeping robot, and the target area map is an area map established for the target area by the sweeping robot; and a display unit configured to display the target area map on the target display interface of the target application, wherein a target object and target annotation information of the target object are displayed on the target area map, the target object is the object in the target area map corresponding to the target object in the target area, and the target annotation information is used to describe the target object.
  • the display unit includes: a first display module configured to display, in the target area map displayed on the target display interface, the target object and the outline information of the target object, wherein the contour information of the target object is used to represent the contour of the target object, and the target annotation information includes the contour information of the target object.
  • the first display module includes: a display submodule configured to display the target object in the target area map displayed on the target display interface, and to display the outline information of the target object in the form of a virtual wall.
  • the display unit includes: a second display module configured to display, in the target area map displayed on the target display interface, the target object and target type information, wherein the target type information is used to represent the object type of the target object, and the target annotation information includes the target type information.
  • In the embodiments of the present application, the corresponding object is marked in the area map according to the description information of the object: the target object image corresponding to the target object in the target area is acquired, where the target area is the area cleaned by the sweeping robot; the target object is recognized according to the target object image to obtain the target recognition result, which contains the description information of the target object; and according to the target recognition result, the target object is marked in the target area map, where the marked target object is the object corresponding to the target object in the target area map.
  • the target area map is the area map established for the target area by the sweeping robot.
  • the obtained recognition result contains the description information of the object (for example, object types, object outlines, etc.), which can ensure that the obtained description information can accurately describe the objects in the area.
  • Since the corresponding objects in the area map are marked according to the description information of the objects, the area map can display both the objects and their annotation information.
  • This achieves the purpose of quickly obtaining object information from the area map, attains the technical effect of improving the accuracy and efficiency of object information acquisition, and thus solves the problem in the related art that constructing an area map by a sweeping robot yields low information-acquisition efficiency.
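  • The flow described above (acquire an object image, recognize it to obtain description information, then annotate the corresponding map object) can be sketched as follows. This is only an illustration: the function names and the dict-based map representation are assumptions, not taken from the application.

```python
# Illustrative sketch of the overall pipeline; the area map is modeled as a
# plain dict from object id to annotation data (an assumption, not the
# application's actual data structure).
def annotate_area_map(area_map, object_id, capture_image, recognize):
    """Capture an image of the detected object, recognize it to obtain
    description information, and attach that information to the
    corresponding object in the area map."""
    image = capture_image(object_id)
    description = recognize(image)      # e.g. {"type": ..., "outline": ...}
    area_map[object_id] = {"annotation": description}
    return area_map

# Toy stand-ins for the capture and recognition stages
annotated = annotate_area_map(
    {}, "obj1",
    capture_image=lambda oid: "raw-image-bytes",
    recognize=lambda img: {"type": "trash_can"},
)
```

In practice, `capture_image` would drive the robot's camera and `recognize` would invoke the recognition model described below; here both are stubbed to show only the data flow.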
  • Fig. 1 is a schematic diagram of the hardware environment of an optional area map processing method according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of an optional method for processing an area map according to an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of another optional method for processing an area map according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another optional method for processing an area map according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an optional area map according to an embodiment of the present application.
  • Fig. 6 is a structural block diagram of an optional area map processing device according to an embodiment of the present application.
  • Fig. 7 is a structural block diagram of another optional area map processing device according to an embodiment of the present application.
  • Fig. 8 is a structural block diagram of an optional electronic device according to an embodiment of the present application.
  • a method for processing an area map is provided.
  • the above method for processing an area map may be applied to a hardware environment composed of a terminal 102 and a server 104 as shown in FIG. 1 .
  • The server 104 is connected to the terminal 102 through a network and can be used to provide services (such as game services, application services, etc.) for the terminal; a database, which may be provided on the server or independently of it, is used to provide data storage services for the server 104.
  • the above-mentioned terminal 102 may include one or more terminals, and different terminals may be connected by communication through a server, or may be directly connected by communication without going through a server.
  • the terminal 102 may include at least one of the following: a user terminal, and a cleaning device, where the cleaning device may include a sweeping robot.
  • the foregoing network may include but is not limited to at least one of the following: a wired network and a wireless network.
  • the above-mentioned wired network may include but is not limited to at least one of the following: a wide area network, a metropolitan area network, and a local area network.
  • the above-mentioned wireless network may include but is not limited to at least one of the following: Wi-Fi (Wireless Fidelity) and Bluetooth.
  • the terminal 102 may include but is not limited to a PC, a mobile phone, a tablet computer, and the like.
  • the method for processing an area map in this embodiment of the present application may be executed by the server 104 , may also be executed by the terminal 102 , and may also be executed jointly by the server 104 and the terminal 102 .
  • the method for processing the area map in the embodiment of the present application executed by the terminal 102 may also be executed by a client installed on it.
  • FIG. 2 is a schematic flowchart of an optional area map processing method according to the embodiment of the present application. As shown in FIG. 2, the method may include the following steps:
  • Step S202: acquire an image of the target object corresponding to the target object in the target area, wherein the target area is the area cleaned by the cleaning robot.
  • a sweeping robot is a general term for a class of devices with a cleaning function, which may be a separate cleaning device or belong to other smart devices, which is not specifically limited in this embodiment.
  • The processing method of the area map in this embodiment may be executed by the sweeping robot, by the smart device to which the sweeping robot belongs, by a background server, or by other devices with data processing capabilities. In this embodiment, execution by the sweeping robot is taken as an example for description.
  • the target user can use the target account to log in to the target application running on the terminal device (that is, the target terminal).
  • the target application is an application that matches the sweeping robot.
  • Through the target application, the sweeping robot can be controlled, realizing interaction between the target user and the sweeping robot. For example, the area map created by the sweeping robot can be displayed to the target user through the target application, and the target user can send cleaning instructions to the sweeping robot through the target application.
  • the sweeping robot can be used to clean a target area.
  • the target area can be a closed area, such as a bedroom, living room, bathroom, etc., or an unenclosed area, such as an outdoor field.
  • the target area may contain a target object, and the target object may be an object with a certain shape, such as a trash can, a wire, slippers, etc., but is not limited thereto, and the type of the target object is not limited in this embodiment.
  • the sweeping robot can be equipped with a data acquisition component, which can be a camera, an infrared sensor, etc., but is not limited thereto. Other components capable of collecting object images can be applied to this embodiment.
  • the data collection component may perform a data collection operation, and the collected data may be a target object image of the target object.
  • the target object image may be an object image of the target object at a certain angle, or may be an object image of the target object at multiple angles.
  • Various operations can be performed by using object images of the target object at multiple angles, for example, the three-dimensional shape of the target object can be constructed, thereby improving the accuracy of recognition.
  • Step S204: recognize the target object according to the image of the target object to obtain a target recognition result, wherein the target recognition result includes description information of the target object.
  • the image of the target object can represent the attribute information of the target object, for example, the shape, color, size, etc. of the object.
  • the target object can be recognized according to the image of the target object to obtain a target recognition result.
  • the obtained target recognition result may include descriptive information of the target object, for example, information describing the object type of the target object, the object outline of the target object, and the like.
  • the above recognition operation may be performed by a sweeping robot. After acquiring the above image of the target object, the cleaning robot can directly perform the above recognition operation according to the image of the target object.
  • the above recognition operation may be performed by other devices such as a smart device to which the sweeping robot belongs, a background server, and the like.
  • the sweeping robot can send the acquired image of the target object to other devices, and after receiving the image of the target object, other devices can perform the above recognition operation according to the image of the target object.
  • The recognition operation can be real-time (for example, performed immediately after the image of the target object is acquired) or non-real-time (for example, performed when the device is idle after the image is acquired); this is not limited in this embodiment.
  • the target object image can contain multiple object images from different angles of the target object.
  • Each of the multiple object images can be recognized separately to obtain a recognition result for each object image; the recognition results of the individual object images are then fused to obtain the target recognition result.
  • multiple object images may also be recognized simultaneously, and the target recognition result may be obtained by fusing image features of each object image.
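  • One simple way to fuse per-view results, sketched here under the assumption that each view yields a probability distribution over candidate object types, is to average the distributions and pick the most probable type. The fusion rule and data shapes are illustrative, not specified by the application.

```python
# Hypothetical fusion of per-view classification results: average the
# class-probability dicts produced for each camera angle, then pick the
# candidate object type with the highest fused probability.
def fuse_view_probabilities(per_view_probs):
    """per_view_probs: list of dicts mapping candidate type -> probability."""
    if not per_view_probs:
        raise ValueError("need at least one view")
    fused = {}
    for probs in per_view_probs:
        for obj_type, p in probs.items():
            fused[obj_type] = fused.get(obj_type, 0.0) + p
    n = len(per_view_probs)
    fused = {t: p / n for t, p in fused.items()}
    best_type = max(fused, key=fused.get)
    return best_type, fused

# Example: three views of the same object
views = [
    {"trash_can": 0.7, "slipper": 0.3},
    {"trash_can": 0.6, "slipper": 0.4},
    {"trash_can": 0.9, "slipper": 0.1},
]
best, fused = fuse_view_probabilities(views)
```

Averaging is only one option; the alternative mentioned in the text, fusing image features before classification, would happen inside the model instead.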
  • Identifying the target object according to the target object image may be: constructing a target object model corresponding to the target object according to the target object image, where the target object model is used to represent the three-dimensional shape of the target object, and then recognizing the target object according to the target object model.
  • Step S206: mark the target object in the target area map according to the target recognition result, wherein the marked target object is the object corresponding to the target object in the target area map, and the target area map is an area map established for the target area by the sweeping robot.
  • the area map established for the target area by the sweeping robot is the target area map.
  • Creating an area map through the sweeping robot refers to: using the data obtained by the sweeping robot to collect data in a certain area to establish an area map of the area.
  • the operation of creating an area map may be performed by the cleaning robot, or may be performed by other devices (for example, a smart device to which the cleaning robot belongs, a background server, etc.), which is not limited in this embodiment.
  • the method for processing the area map in this embodiment may be an object labeling scheme in the area map, and there may be various timings for performing object labeling.
  • Object labeling may be performed while the target area map is being built; in this case, map building and object labeling are performed simultaneously.
  • Object labeling may also be performed after the target area map has been established.
  • object labeling may be performed on existing objects in an established target area map.
  • The above-mentioned object labeling method is compatible with existing area map creation schemes, and enriches the object information in the target area map.
  • Object labeling can also be performed on newly added objects in an established target area map.
  • the object labeling method described above can be applied to the scene where a new object is added in the target area, which improves the accuracy of the object information in the area map, and further improves the ability of the area map to represent the area.
  • Through this embodiment, the target object image corresponding to the target object in the target area is acquired, wherein the target area is the area cleaned by the sweeping robot; the target object is recognized according to the target object image to obtain the target recognition result, which contains the description information of the target object; and according to the target recognition result, the target object is marked in the target area map, where the marked target object is the object corresponding to the target object in the target area map, and the target area map is the area map established for the target area by the sweeping robot. This solves the problem in the related art that constructing an area map by a sweeping robot yields low information-acquisition efficiency, and improves the accuracy and efficiency of object information acquisition.
  • the above method before acquiring the target object image corresponding to the target object in the target area, the above method further includes:
  • acquiring the image of the target object may be performed when the cleaning robot cleans the target area. If the sweeping robot encounters a target object during cleaning, the above-mentioned data collection component or other sensing components can detect the presence of the target object in the target area. If a target object is detected in the target area, the step of acquiring an image of the target object may be triggered, regardless of whether a target area map has been established for the target area.
  • The cleaning robot may determine whether there is an object corresponding to the target object in the target area map, and whether that object has been marked. If there is no object corresponding to the target object in the target area map, or there is such an object but it has not been marked, the step of acquiring the target object image is triggered.
  • When the sweeping robot detects an object in the area during the cleaning process, it triggers acquisition of the object image and then marks the corresponding object, which improves the timeliness of object marking.
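  • The triggering condition described above (acquire an image when the object is absent from the map, or present but not yet annotated) can be sketched as follows; the dict-based map entries and the `annotated` flag are hypothetical names for illustration.

```python
# Hypothetical trigger logic for image acquisition during cleaning.
def should_acquire_image(area_map, object_id):
    """Return True when an image should be captured: the object has no
    corresponding entry in the area map, or its entry is not yet
    annotated."""
    entry = area_map.get(object_id)
    if entry is None:
        return True  # no corresponding object in the map yet
    return not entry.get("annotated", False)

# Example map: obj1 already annotated, obj2 present but unmarked
area_map = {"obj1": {"annotated": True}, "obj2": {"annotated": False}}
```

With this map, detection of `obj1` triggers nothing, while `obj2` (unmarked) and any object not yet in the map both trigger acquisition.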
  • the target object is recognized according to the target object image, and the target recognition result obtained includes:
  • S21 Perform type recognition on the target object according to the target object image to obtain target type information, wherein the target type information is used to indicate the object type of the target object, and the target recognition result includes the target type information.
  • Providing the type of an object makes it more convenient for the user to obtain object information; that is, richer object information can be provided. Therefore, in this embodiment, in order to improve the convenience of providing object information, the target recognition result may include target type information used to indicate the object type of the target object.
  • the sweeping robot can identify the type of the target object according to the image of the target object, and obtain the information of the target type.
  • marking the target object in the target area map according to the target recognition result may include: marking target type information on a target position matching the target object in the target area map.
  • The target type information may be expressed as text or in forms other than text, for example, symbols, patterns, and so on.
  • The above-mentioned target position may be above, below, to the left of, or to the right of the target object, with the distance between the two less than or equal to a target distance threshold.
  • the type recognition is performed according to the object image, so as to mark the object type on the area map, which can enrich the object information provided by the area map, and improve the convenience of object information acquisition.
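  • Choosing a labeling position that is adjacent to the object (above, below, left, or right) and within the map bounds, as described above, can be sketched as follows; the grid coordinates and one-cell offset are assumptions for illustration, not the application's actual placement rule.

```python
# Hypothetical helper: pick a cell near the object on a grid map where the
# type label can be drawn, trying above, below, left, and right in turn.
def label_position(obj_x, obj_y, grid_w, grid_h, offset=1):
    """Return the first in-bounds candidate position at `offset` cells
    from the object; fall back to the object's own cell."""
    candidates = [
        (obj_x, obj_y - offset),  # above
        (obj_x, obj_y + offset),  # below
        (obj_x - offset, obj_y),  # left
        (obj_x + offset, obj_y),  # right
    ]
    for x, y in candidates:
        if 0 <= x < grid_w and 0 <= y < grid_h:
            return x, y
    return obj_x, obj_y

# An object at the top-left corner gets its label placed below it,
# since "above" would fall outside the map.
corner = label_position(0, 0, 10, 10)
center = label_position(5, 5, 10, 10)
```

The `offset` parameter plays the role of the target distance threshold mentioned above: all candidates lie at most `offset` cells from the object.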
  • the target object type is identified according to the target object image, and the target type information obtained includes:
  • the identification method adopted may be a non-AI (Artificial Intelligence) identification method.
  • the type of the object may be identified based on AI, for example, the type of the object may be identified using a target recognition model.
  • the object recognition model is a neural network model, and it can also be other types of AI models.
  • the target recognition model can be obtained by using the object image of the sample object to train the initial recognition model.
  • The sample object is marked with the corresponding object type, so that the model parameters of the recognition model can be adjusted according to the output result of the recognition model and the marked object type, thereby obtaining a trained target recognition model.
  • the object image of the sample object may be obtained by collecting data on the sample object using a sweeping robot of the same type as the sweeping robot that acquires the image of the target object.
  • the device for training the recognition model may be a sweeping robot or other devices, which is not limited in this embodiment.
  • The target object image can be input into the target recognition model, and the target recognition model can recognize the type of the target object based on the image: it determines, for each of multiple candidate object types, the probability that the target object is of that type, and then determines the object type that best matches the target object from the multiple candidates.
  • The target type information indicates the object type that best matches the target object.
  • the type of object is recognized based on AI, and the accuracy of object type recognition can be improved.
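As a hedged illustration only (the embodiment does not fix a model architecture), the recognize-then-select logic above can be sketched with a toy stand-in for the target recognition model: a nearest-centroid classifier over pre-extracted feature vectors that outputs a probability per candidate object type and picks the best match. The class names and feature vectors here are invented for the example:

```python
import math

# Hypothetical stand-in for the trained target recognition model:
# a nearest-centroid classifier over pre-extracted image feature vectors.
class RecognitionModel:
    def __init__(self):
        self.centroids = {}  # object type -> mean feature vector

    def train(self, samples):
        """samples: list of (feature_vector, object_type) pairs, where each
        sample object is labeled with its corresponding object type."""
        sums, counts = {}, {}
        for features, obj_type in samples:
            acc = sums.setdefault(obj_type, [0.0] * len(features))
            for i, v in enumerate(features):
                acc[i] += v
            counts[obj_type] = counts.get(obj_type, 0) + 1
        self.centroids = {
            t: [v / counts[t] for v in acc] for t, acc in sums.items()
        }

    def predict(self, features):
        """Return (best_type, {type: probability}) via a softmax over the
        negative distance to each candidate type's centroid."""
        scores = {t: -math.dist(features, c) for t, c in self.centroids.items()}
        m = max(scores.values())
        exp = {t: math.exp(s - m) for t, s in scores.items()}
        total = sum(exp.values())
        probs = {t: e / total for t, e in exp.items()}
        best = max(probs, key=probs.get)
        return best, probs

model = RecognitionModel()
model.train([([0.9, 0.1], "trash_can"), ([0.1, 0.9], "slipper")])
best, probs = model.predict([0.8, 0.2])
```

A real implementation would replace the centroid classifier with the trained neural network mentioned above; the probability-per-candidate-then-argmax selection step is the part this sketch illustrates.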
  • the target object is recognized according to the target object image, and the target recognition result obtained includes:
  • S41 Perform contour recognition on the target object according to the target object image to obtain target contour information, wherein the target contour information is used to represent the object contour of the target object, and the target recognition result includes the target contour information.
  • the target recognition result may include target outline information used to represent the object outline of the target object.
  • the sweeping robot can recognize the outline of the target object according to the image of the target object, and obtain the target outline information.
  • there are many ways to perform contour recognition on the target object image, including but not limited to at least one of the following: image contour extraction, image segmentation, AI-based semantic segmentation, etc.
  • the contour recognition method is not limited in this embodiment.
  • the contour represented by the target contour information may include specific contours of different parts.
  • different parts belonging to the same object can be marked with the same labeling method to reflect that these parts belong to the same object.
  • the outline represented by the target outline information may also include the overall outline of the object, that is, the outline that can include all parts of the object.
  • marking the outline of an object can include all the parts belonging to the object, so as to reflect that these parts belong to the object.
  • contour recognition is performed according to the object image, so that the contour of the object is marked on the area map, and the convenience of object information acquisition can be improved.
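Since the embodiment leaves the contour-recognition method open, the following is only the simplest grid-based variant, not the robot's actual algorithm: given a binary occupancy mask of the detected object, the contour is the set of occupied cells that touch free space (or the edge of the grid):

```python
# Minimal sketch of contour extraction on a binary occupancy grid.
def contour_cells(mask):
    """Return the cells of `mask` (list of lists of 0/1) that are occupied
    and have at least one free or out-of-grid 4-neighbour."""
    rows, cols = len(mask), len(mask[0])
    contour = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    contour.add((r, c))
                    break
    return contour

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
edges = contour_cells(mask)
```

The same boundary test extends directly to per-part contours (run it on each part's mask) or to an overall contour (run it on the union of the parts' masks), matching the two marking options described above.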
  • marking the target object on the target area map according to the target recognition result includes:
  • marking the target object on the target area map according to the target recognition result may include: marking the target object's outline on the target area map according to the target outline information.
  • the contour of the target object may be determined according to the object contour of the target object. For example, the contour of the target object is obtained after operations such as scaling and translation are performed on the object contour of the target object.
  • the outline of the target object can be marked in the form of a virtual wall, or it can be marked in the form of other lines and curves other than the virtual wall.
  • the outline of the target object is marked in the form of a virtual wall in the target map, and the area where the target object is located can be set as a detour area of the sweeping robot (in other words, an area where cleaning is prohibited), so that the sweeping robot can be controlled to avoid the target object during cleaning.
  • the object outlines corresponding to the objects in the area are marked on the area map in the form of a virtual wall, which can improve the convenience of outline marking and facilitate the cleaning control of the sweeping robot.
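Marking a contour as a virtual wall on a grid map can be sketched as below; the cell values and the map representation are assumptions made for illustration, not the patent's actual data format:

```python
# Illustrative cell values; the real map format is not specified.
FREE, OBJECT, VIRTUAL_WALL = 0, 1, 2

def mark_virtual_wall(area_map, contour):
    """Write the object's contour cells into the area map as virtual-wall
    cells; a cleaning planner would then treat them as a no-clean boundary
    and route the robot around the enclosed region."""
    for r, c in contour:
        area_map[r][c] = VIRTUAL_WALL
    return area_map

area_map = [[FREE] * 4 for _ in range(3)]
marked = mark_virtual_wall(area_map, [(1, 1), (1, 2)])
```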
  • the above method further includes:
  • the established target area map can be stored on the sweeping robot and used by the sweeping robot when cleaning the target area.
  • the sweeping robot may also send the map of the target area to the target application logged in with the target account on the target terminal.
  • there can be one or more timings for sending the target area map, for example, after the target area map is established, after the target area map is updated, after a map acquisition request of the target application is received, and other timings at which sending the area map is allowed.
  • the display interface for displaying the area map in the target application is the target display interface.
  • the target application can display the target area map on the target display interface.
  • the target application may store the target area map on the target terminal, and display the target area map on the target display interface after detecting the map display instruction.
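The patent does not specify how the annotated map is transmitted to the target application; as a purely illustrative sketch, the map grid and its annotations (object type plus outline cells) could be serialized as a JSON payload. The field names `map` and `annotations` are assumptions, not part of the described method:

```python
import json

# Hypothetical wire format for sending the annotated area map to the App.
def build_map_payload(grid, objects):
    """grid: the area map as a list of rows; objects: list of dicts with
    'type' (object type label) and 'contour' (outline cell coordinates)."""
    return json.dumps({
        "map": grid,
        "annotations": [
            {"type": o["type"], "contour": sorted(o["contour"])}
            for o in objects
        ],
    })

payload = build_map_payload(
    [[0, 1], [0, 0]],
    [{"type": "trash_can", "contour": [(0, 1)]}],
)
decoded = json.loads(payload)
```

On the terminal side, the target application would parse the payload and render the grid plus each annotation on the target display interface.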
  • a method for processing an area map is also provided.
  • the above method for processing an area map may be applied to a hardware environment composed of a terminal 102 and a server 104 as shown in FIG. 1. Details already described will not be repeated here.
  • FIG. 3 is a schematic flowchart of another optional method for processing an area map according to an embodiment of the present application. As shown in FIG. 3, the flow of the method may include the following steps:
  • Step S302, receiving the target area map sent by the sweeping robot through the target application, wherein the target application is logged in using the target account bound to the sweeping robot, and the target area map is an area map established by the sweeping robot for the target area.
  • the method for processing an area map in this embodiment can be applied to a scene where a sweeping robot is used to construct an area map.
  • the sweeping robot, the target application, and the target area map are the same as or similar to those in the foregoing embodiments.
  • the target area map may be an area map created or updated by the area map processing method in the foregoing embodiments; details already described will not be repeated here.
  • a target application logged in using the target account may run on the target terminal.
  • a communication connection may be established between the target application and the sweeping robot, and through the communication connection between the two, the target application may receive the aforementioned target area map sent by the sweeping robot.
  • Step S304, displaying the target area map on the target display interface of the target application, where the target object and the target labeling information of the target object are displayed on the target area map, and the target object is the object in the target area map corresponding to the target object in the target area;
  • the target annotation information is used to describe the target object.
  • the display interface for displaying the area map in the target application is the target display interface.
  • the target application can display the target area map on the target display interface.
  • the target application may store the target area map on the target terminal, and display the target area map on the target display interface after detecting the map display instruction.
  • the displayed target area map may include the above-mentioned target object and label information of the target object, that is, target label information.
  • the target annotation information can be used to describe the target object, thereby enriching the object information that can be provided in the area map.
  • the target labeling information may include the labeling information obtained by labeling the target object in the target area map according to the aforementioned target recognition result, and may also contain other labeling information, for example, labeling information obtained by labeling the target object in other ways;
  • the labeling method is not limited in this embodiment.
  • the target application receives the target area map sent by the sweeping robot, wherein the target application uses the target account bound to the sweeping robot to log in, and the target area map is the area map established for the target area by the sweeping robot;
  • the target area map is displayed on the target display interface of the target application, where the target object and the target label information of the target object are displayed on the target area map; the target object is the object in the target area map corresponding to the target object in the target area, and the target labeling information is used to describe the target object. This solves the problem of low information acquisition efficiency in the related-art way of building area maps by sweeping robots, and improves the accuracy and efficiency of object information acquisition.
  • displaying the target area map on the target display interface of the target application includes:
  • the target annotation information may include outline information of the target object.
  • the contour information of the target object may be used to represent the contour of the target object, which may include contour information obtained by marking the contour of the target object in the target area map according to the aforementioned target contour information, or may include contour information obtained by other means Outline information of the target object.
  • the target area map displayed on the target display interface may include target objects and outline information of the target objects. If an object contains multiple parts, the multiple parts can be marked with the same labeling method to reflect that these parts belong to the same object; or, the outline of the object can contain all parts belonging to the object to reflect that these parts belong to that object.
  • the outline of the object is marked on the area map, which can improve the convenience of obtaining object information.
  • displaying the target object and the outline information of the target object in the target area map displayed on the target display interface includes:
  • the outline information of the target object may be displayed in the form of a virtual wall, or in the form of other straight lines or curves other than the virtual wall.
  • the outline information of the target object is displayed in the form of a virtual wall in the target map, and the area where the target object is located can be set as the detour area of the sweeping robot, so that the sweeping robot can be controlled to avoid the target object.
  • the outline of objects in the area map is displayed in the form of a virtual wall, which can improve the convenience of outline display and facilitate the cleaning control of the sweeping robot.
  • displaying the target area map on the target display interface of the target application includes:
  • S91 Display target objects and target type information on the target area map displayed on the target display interface, wherein the target type information is used to indicate the object type of the target object, and the target label information includes target type information.
  • the target labeling information may include target type information.
  • the target type information may be used to indicate the object type of the target object, which may be the target type information obtained by identifying the type of the target object according to the target object image as described above, or may be the target type information obtained in other ways.
  • the target area map displayed on the target display interface may include target object and target type information.
  • the target type information may be displayed at a target location matching the target object.
  • the above-mentioned target position may be above, below, to the left of, or to the right of the target object, with the distance between the two being less than or equal to the target distance threshold; in this embodiment, there is no limitation on the display method or marking position of the target type information.
  • displaying object types corresponding to objects in the area map on the area map can enrich the object information provided by the area map and improve the convenience of object information acquisition.
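The label-placement rule above can be sketched as follows; the candidate order (above, below, left, right), the grid bounding box, and the `max_offset` threshold are all illustrative assumptions rather than details from the embodiment:

```python
# Sketch of choosing where to draw a type label near an object's bounding box.
def label_position(bbox, map_size, offset=1, max_offset=3):
    """bbox = (top, left, bottom, right) in grid cells; try positions above,
    below, left of, and right of the box at increasing offsets up to the
    distance threshold, and return the first candidate inside the map."""
    top, left, bottom, right = bbox
    rows, cols = map_size
    cx = (left + right) // 2
    cy = (top + bottom) // 2
    for d in range(offset, max_offset + 1):
        for r, c in ((top - d, cx), (bottom + d, cx),
                     (cy, left - d), (cy, right + d)):
            if 0 <= r < rows and 0 <= c < cols:
                return (r, c)
    return (cy, cx)  # fall back to the object's centre

# Object touching the top edge of a 6x8 map: "above" is off-grid,
# so the label falls back to the "below" candidate.
pos = label_position((0, 2, 1, 4), map_size=(6, 8))
```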
  • This example provides a self-labeling solution for virtual walls based on AI technology, which can automatically mark the outline of objects (for example, obstacles) in the form of virtual walls on the App map, and mark the type.
  • the flow of the method for processing an area map in this optional example may include the following steps:
  • Step S402, the sweeping robot detects an object during the cleaning process;
  • Step S404, the sweeping robot recognizes the three-dimensional shape of the object, and recognizes the type of the object based on AI;
  • Step S406, mark the outline of the object in the form of a virtual wall in the App map, and mark the type of the object, wherein the object is represented by a two-dimensional image in the App map.
  • the App map displays a two-dimensional image of the object; the type of the object is represented by text, and the dotted line is the virtual wall along the outline of the object.
  • in the App map, users can see the types of objects in the room and, at the same time, see their real shapes more clearly and accurately.
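The three steps above can be tied together in a minimal, self-contained sketch; the `classify` stub, the grid representation, and the cell value used for virtual walls are hypothetical stand-ins for the robot's actual components:

```python
# End-to-end sketch of steps S402-S406 under assumed data types: an object
# has been detected as a set of occupied grid cells; classify it (stubbed),
# then mark its outline as a virtual wall on the map.
VIRTUAL_WALL = 2

def classify(cells):
    # Stand-in for the AI type recognition of step S404.
    return "obstacle"

def outline(cells):
    """Boundary cells of the object: occupied cells with a free 4-neighbour."""
    cells = set(cells)
    return {
        (r, c) for r, c in cells
        if any((r + dr, c + dc) not in cells
               for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)))
    }

def process_detection(area_map, cells):
    obj_type = classify(cells)            # step S404: type recognition
    for r, c in outline(cells):           # step S406: virtual-wall marking
        area_map[r][c] = VIRTUAL_WALL
    return obj_type                       # label to draw next to the object

grid = [[0] * 5 for _ in range(5)]
label = process_detection(grid, [(1, 1), (1, 2), (2, 1), (2, 2)])
```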
  • FIG. 6 is a structural block diagram of an optional area map processing device according to an embodiment of the present application. As shown in Fig. 6, the device may include:
  • An acquisition unit 602 configured to acquire an image of a target object corresponding to a target object in a target area, where the target area is an area cleaned by the sweeping robot;
  • the recognition unit 604 is connected to the acquisition unit 602, and is used to recognize the target object according to the image of the target object to obtain a target recognition result, wherein the target recognition result includes description information of the target object;
  • the labeling unit 606 is connected to the recognition unit 604, and is used to mark the target object in the target area map according to the target recognition result, wherein the target object is the object corresponding to the target object in the target area map, and the target area map is an area map established by the sweeping robot for the target area.
  • the obtaining unit 602 in this embodiment can be used to perform the above step S202;
  • the identification unit 604 in this embodiment can be used to perform the above step S204;
  • the labeling unit 606 in this embodiment can be used to perform the above step S206.
  • the target object image corresponding to the target object in the target area is obtained, where the target area is the area cleaned by the sweeping robot; the target object is recognized according to the target object image to obtain the target recognition result, where the target recognition result contains the description information of the target object; according to the target recognition result, the target object is marked in the target area map, where the target object is the object corresponding to the target object in the target area map, and the target area map is the area map established by the sweeping robot for the target area.
  • this solves the problem of low information acquisition efficiency in the related-art way of building area maps by sweeping robots, and improves the accuracy and efficiency of object information acquisition.
  • the above-mentioned device also includes:
  • the detection unit is configured to detect, before the target object image corresponding to the target object in the target area is acquired, the existence of the target object in the target area during the process of the sweeping robot cleaning the target area.
  • the identification unit 604 includes:
  • the first identification module is configured to identify the type of the target object according to the image of the target object to obtain target type information, wherein the target type information is used to indicate the object type of the target object, and the target recognition result includes the target type information.
  • the first identification module includes:
  • the input sub-module is used to input the image of the target object into the target recognition model to obtain the target type information output by the target recognition model.
  • the target recognition model is obtained by using the object image of the sample object to train the initial recognition model, and the sample object is marked with the corresponding type of object.
  • the identification unit 604 includes:
  • the second recognition module is configured to perform contour recognition on the target object according to the target object image to obtain target contour information, wherein the target contour information is used to represent the object contour of the target object, and the target recognition result includes the target contour information.
  • the labeling unit 606 includes:
  • the labeling module is used to mark the outline of the target object in the form of a virtual wall in the target area map according to the target outline information.
  • the above-mentioned device also includes:
  • the sending unit is configured to, after the target object is marked in the target area map according to the target recognition result, send the target area map to the target application on the target terminal for display, wherein the target application is logged in using the target account bound to the sweeping robot.
  • FIG. 7 is a structural block diagram of another optional area map processing device according to an embodiment of the present application. As shown in Fig. 7, the device may include:
  • the receiving unit 702 is configured to receive the target area map sent by the sweeping robot through the target application, wherein the target application uses the target account bound to the sweeping robot to log in, and the target area map is an area map established for the target area by the sweeping robot;
  • the display unit 704 is connected to the receiving unit 702, and is used to display the target area map on the target display interface of the target application, wherein the target object and the target labeling information of the target object are displayed on the target area map; the target object is the object in the target area map corresponding to the target object in the target area, and the target annotation information is used to describe the target object.
  • the receiving unit 702 in this embodiment can be used to perform the above step S302;
  • the display unit 704 in this embodiment can be used to perform the above step S304.
  • the target application receives the target area map sent by the sweeping robot, where the target application is logged in using the target account bound to the sweeping robot, and the target area map is the area map established by the sweeping robot for the target area;
  • the target area map is displayed on the target display interface of the target application, where the target object and the target labeling information of the target object are displayed on the target area map; the target object is the object in the target area map corresponding to the target object in the target area, and the target labeling information is used to describe the target object. This solves the problem of low information acquisition efficiency in the related-art way of building area maps by sweeping robots, and improves the accuracy and efficiency of object information acquisition.
  • the display unit 704 includes:
  • the first display module is used to display the target object and the outline information of the target object in the target area map displayed on the target display interface, wherein the outline information of the target object is used to represent the outline of the target object, and the target label information includes the target object profile information.
  • the first display module includes:
  • the display sub-module is used for displaying the target object in the target area map displayed on the target display interface, and displaying the outline information of the target object in the form of a virtual wall.
  • the display unit 704 includes:
  • the second display module is used to display the target object and target type information in the target area map displayed on the target display interface, wherein the target type information is used to indicate the object type of the target object, and the target label information includes target type information.
  • the above modules can run in the hardware environment shown in FIG. 1 , and can be implemented by software or by hardware, wherein the hardware environment includes a network environment.
  • a storage medium is also provided.
  • the above-mentioned storage medium may be used to execute the program code of any one of the above-mentioned area map processing methods in the embodiments of the present application.
  • the foregoing storage medium may be located on at least one network device among the plurality of network devices in the network shown in the foregoing embodiments.
  • the storage medium is configured to store program codes for performing the following steps:
  • the target object is an object corresponding to the target object in the target area map
  • the target area map is an area map established for the target area by the sweeping robot.
  • the above-mentioned storage medium may include, but is not limited to, various media capable of storing program codes, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
  • an electronic device for implementing the above method for processing an area map is also provided, and the electronic device may be a server, a terminal, or a combination thereof.
  • Fig. 8 is a structural block diagram of an optional electronic device according to an embodiment of the present application. As shown in Fig. 8, the electronic device includes a processor, a communication interface 804, and a memory 806, which complete mutual communication through a communication bus 808, wherein,
  • the target object is an object corresponding to the target object in the target area map
  • the target area map is an area map established for the target area by the sweeping robot.
  • the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, etc.
  • the communication bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is used in FIG. 8 , but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above-mentioned electronic device and other devices.
  • the above-mentioned memory may include RAM, and may also include non-volatile memory, for example, at least one disk memory.
  • the memory may also be at least one storage device located away from the aforementioned processor.
  • the above-mentioned memory 806 may include, but is not limited to, the acquiring unit 602, the identifying unit 604, and the labeling unit 606 in the control device of the above-mentioned device. In addition, it may also include but not limited to other module units in the control device of the above equipment, which will not be described in detail in this example.
  • the memory 806 may include, but is not limited to, the receiving unit 702 and the display unit 704 in the control device of the above-mentioned device. In addition, it may also include but not limited to other module units in the control device of the above equipment, which will not be described in detail in this example.
  • the processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the device implementing the above method for processing an area map may be a terminal device, and the terminal device may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a PDA, a mobile Internet device (MID), a PAD, or other terminal equipment.
  • FIG. 8 does not limit the structure of the above-mentioned electronic device.
  • the electronic device may also include more or less components than those shown in FIG. 8 (such as a network interface, a display device, etc.), or have a different configuration from that shown in FIG. 8 .
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions that cause one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of units or modules may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution provided in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.


Abstract

A method and apparatus for processing an area map, a storage medium, and an electronic apparatus. The method includes: acquiring a target object image corresponding to a target object in a target area, where the target area is an area cleaned by a sweeping robot (S202); recognizing the target object according to the target object image to obtain a target recognition result, where the target recognition result contains description information of the target object (S204); and marking, according to the target recognition result, the target object in a target area map, where the target object is the object in the target area map corresponding to the target object, and the target area map is an area map established by the sweeping robot for the target area (S206). The method solves the problem in the related art that building an area map by a sweeping robot provides low information acquisition efficiency.

Description

Method and Apparatus for Processing an Area Map, Storage Medium, and Electronic Apparatus
The present disclosure claims priority to the following patent application: the Chinese patent application filed with the China National Intellectual Property Administration on June 23, 2021, with application number 202110701569.0 and invention title "Method and Apparatus for Processing an Area Map, Storage Medium, and Electronic Apparatus"; the entire contents of the above patent application are incorporated into the present disclosure by reference.
【Technical Field】
The present application relates to the field of communications, and in particular to a method and apparatus for processing an area map, a storage medium, and an electronic apparatus.
【Background Art】
At present, users can use a sweeping robot to clean their house. A sweeping robot usually interacts with the user through a mobile phone App (Application), and the house map established by the sweeping robot is displayed in the App. However, such a house map usually only shows whether an object exists at a given position, and the user cannot obtain accurate object information from the map.
It can be seen that the way of building an area map by a sweeping robot in the related art has the problem of low information acquisition efficiency.
【Summary of the Invention】
The purpose of the present application is to provide a method and apparatus for processing an area map, a storage medium, and an electronic apparatus, so as to at least solve the problem in the related art that building an area map by a sweeping robot provides low information acquisition efficiency.
The purpose of the present application is achieved through the following technical solutions:
According to one aspect of the embodiments of the present application, a method for processing an area map is provided, including: acquiring a target object image corresponding to a target object in a target area, where the target area is an area cleaned by a sweeping robot; recognizing the target object according to the target object image to obtain a target recognition result, where the target recognition result contains description information of the target object; and marking the target object in a target area map according to the target recognition result, where the target object is the object in the target area map corresponding to the target object, and the target area map is an area map established by the sweeping robot for the target area.
In an exemplary embodiment, before acquiring the target object image corresponding to the target object in the target area, the method further includes: detecting, during the process of the sweeping robot cleaning the target area, that the target object exists in the target area.
In an exemplary embodiment, recognizing the target object according to the target object image to obtain the target recognition result includes: performing type recognition on the target object according to the target object image to obtain target type information, where the target type information is used to indicate the object type of the target object, and the target recognition result includes the target type information.
In an exemplary embodiment, performing type recognition on the target object according to the target object image to obtain the target type information includes: inputting the target object image into a target recognition model to obtain the target type information output by the target recognition model, where the target recognition model is obtained by training an initial recognition model using object images of sample objects, and the sample objects are labeled with corresponding object types.
In an exemplary embodiment, recognizing the target object according to the target object image to obtain the target recognition result includes: performing contour recognition on the target object according to the target object image to obtain target contour information, where the target contour information is used to represent the object contour of the target object, and the target recognition result includes the target contour information.
In an exemplary embodiment, marking the target object in the target area map according to the target recognition result includes: marking the contour of the target object in the target area map in the form of a virtual wall according to the target contour information.
In an exemplary embodiment, after marking the target object in the target area map according to the target recognition result, the method further includes: sending the target area map to a target application on a target terminal for display, where the target application is logged in using a target account bound to the sweeping robot.
According to another aspect of the embodiments of the present application, a method for processing an area map is further provided, including: receiving, through a target application, a target area map sent by a sweeping robot, where the target application is logged in using a target account bound to the sweeping robot, and the target area map is an area map established by the sweeping robot for a target area; and displaying the target area map on a target display interface of the target application, where a target object and target annotation information of the target object are displayed on the target area map, the target object is the object in the target area map corresponding to a target object in the target area, and the target annotation information is used to describe the target object.
在一个示例性实施例中,在所述目标应用的所述目标显示界面上显示所述目标区域地图包括:在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象和所述目标对象的轮廓信息,其中,所述目标对象的轮廓信息用于表示所述目标对象的轮廓,所述目标标注信息包括所述目标对象的轮廓信息。
在一个示例性实施例中,在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象和所述目标对象的轮廓信息包括:在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象、以及以虚拟墙的形式显示所述目标对象的轮廓信息。
在一个示例性实施例中,在所述目标应用的所述目标显示界面上显示所述目标区域地图包括:在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象和目标种类信息,其中,所述目标种类信息用于表示所述目标物体的物体种类,所述目标标注信息包括所述目标种类信息。
According to yet another aspect of the embodiments of the present application, an apparatus for processing an area map is further provided, including: an acquisition unit, configured to acquire a target object image corresponding to a target object in a target area, where the target area is an area cleaned by a sweeping robot; a recognition unit, configured to recognize the target object according to the target object image to obtain a target recognition result, where the target recognition result contains description information of the target object; and a labeling unit, configured to mark the target object in a target area map according to the target recognition result, where the target object is the object in the target area map corresponding to the target object, and the target area map is an area map established by the sweeping robot for the target area.
In an exemplary embodiment, the apparatus further includes: a detection unit, configured to detect, before the target object image corresponding to the target object in the target area is acquired and during the process of the sweeping robot cleaning the target area, that the target object exists in the target area.
In an exemplary embodiment, the recognition unit includes: a first recognition module, configured to perform type recognition on the target object according to the target object image to obtain target type information, where the target type information is used to indicate the object type of the target object, and the target recognition result includes the target type information.
In an exemplary embodiment, the first recognition module includes: an input sub-module, configured to input the target object image into a target recognition model to obtain the target type information output by the target recognition model, where the target recognition model is obtained by training an initial recognition model using object images of sample objects, and the sample objects are labeled with corresponding object types.
In an exemplary embodiment, the recognition unit includes: a second recognition module, configured to perform contour recognition on the target object according to the target object image to obtain target contour information, where the target contour information is used to represent the object contour of the target object, and the target recognition result includes the target contour information.
In an exemplary embodiment, the labeling unit includes: a labeling module, configured to mark the contour of the target object in the target area map in the form of a virtual wall according to the target contour information.
In an exemplary embodiment, the apparatus further includes: a sending unit, configured to send, after the target object is marked in the target area map according to the target recognition result, the target area map to a target application on a target terminal for display, where the target application is logged in using a target account bound to the sweeping robot.
According to yet another aspect of the embodiments of the present application, an apparatus for processing an area map is further provided, including: a receiving unit, configured to receive, through a target application, a target area map sent by a sweeping robot, where the target application is logged in using a target account bound to the sweeping robot, and the target area map is an area map established by the sweeping robot for a target area; and a display unit, configured to display the target area map on a target display interface of the target application, where a target object and target annotation information of the target object are displayed on the target area map, the target object is the object in the target area map corresponding to a target object in the target area, and the target annotation information is used to describe the target object.
In an exemplary embodiment, the display unit includes: a first display module, configured to display, in the target area map displayed on the target display interface, the target object and contour information of the target object, where the contour information of the target object is used to represent the contour of the target object, and the target annotation information includes the contour information of the target object.
In an exemplary embodiment, the first display module includes: a display sub-module, configured to display the target object in the target area map displayed on the target display interface, and to display the contour information of the target object in the form of a virtual wall.
In an exemplary embodiment, the display unit includes: a second display module, configured to display, in the target area map displayed on the target display interface, the target object and target type information, where the target type information is used to indicate the object type of the target object, and the target annotation information includes the target type information.
In the embodiments of the present application, a manner of annotating the corresponding object on the area map according to the description information of the physical object is adopted: a target object image corresponding to a target object in a target area is acquired, where the target area is an area cleaned by a sweeping robot; the target object is recognized according to the target object image to obtain a target recognition result, where the target recognition result contains description information of the target object; and the target object is marked in a target area map according to the target recognition result, where the target object is the object in the target area map corresponding to the target object, and the target area map is an area map established by the sweeping robot for the target area. Since the object is recognized from its object image, the obtained recognition result contains description information of the object (for example, object type, object contour, etc.), which ensures that the obtained description information can accurately describe the objects in the area. At the same time, since the corresponding object in the area map is annotated according to the description information of the physical object, the annotation information of the object can be displayed in the area map together with the object itself, thereby achieving the purpose of quickly obtaining object information from the area map, achieving the technical effect of improving the accuracy and efficiency of object information acquisition, and further solving the problem in the related art that building an area map by a sweeping robot provides low information acquisition efficiency.
【附图说明】
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是根据本申请实施例的一种可选的区域地图的处理方法的硬件环境的示意图;
图2是根据本申请实施例的一种可选的区域地图的处理方法的流程示意图;
图3是根据本申请实施例的另一种可选的区域地图的处理方法的流程示意图;
图4是根据本申请实施例的又一种可选的区域地图的处理方法的流程示意图;
图5是根据本申请实施例的一种可选的区域地图的示意图;
图6是根据本申请实施例的一种可选的区域地图的处理装置的结构框图;
图7是根据本申请实施例的另一种可选的区域地图的处理装置的结构框图;
图8是根据本申请实施例的一种可选的电子装置的结构框图。
【具体实施方式】
下文中将参考附图并结合实施例来详细说明本申请。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
根据本申请实施例的一个方面,提供了一种区域地图的处理方法。可选地,在本实施例中,上述区域地图的处理方法可以应用于如图1所示的由终端102和服务器104所构成的硬件环境中。如图1所示,服务器104通过网络与终端102进行连接,可用于为终端或终端上安装的客户端提供服务(如游戏服务、应用服务等),可在服务器上或独立于服务器设置数据库,用于为服务器104提供数据存储服务。
上述终端102可以包含一个或多个,不同的终端之间可以通过服务器进行通信连接,也可以不经过服务器直接进行通信连接。可选地,终端102可以包括以下至少之一:用户终端,清扫设备,该清扫设备可以包含扫地机器人。
上述网络可以包括但不限于以下至少之一:有线网络,无线网络。上述有线网络可以包括但不限于以下至少之一:广域网,城域网,局域网。上述无线网络可以包括但不限于以下至少之一:WIFI(Wireless Fidelity,无线保真),蓝牙。终端102可以为但不限定于PC、手机、平板电脑等。
本申请实施例的区域地图的处理方法可以由服务器104来执行,也可以由终端102来执行,还可以是由服务器104和终端102共同执行。其中,终端102执行本申请实施例的区域地图的处理方法也可以是由安装在其上的客户端来执行。
以由清扫设备来执行本实施例中的区域地图的处理方法为例,图2是根据本申请实施例的一种可选的区域地图的处理方法的流程示意图,如图2所示,该方法的流程可以包括以下步骤:
步骤S202,获取与目标区域中的目标物体对应的目标物体图像,其中,目标区域为扫地机器人所清扫的区域。
本实施例中的区域地图的处理方法可以应用到通过扫地机器人进行区域地图构建的场景中。扫地机器人是具有清扫功能的一类设备的统称,其可以是单独的清扫设备,也可以属于其他的智能设备,本实施例中对此不做具体限定。
本实施例中的区域地图的处理方法可以是由扫地机器人执行的,可以是由扫地机器人所属的智能设备执行的,也可以是由后台服务器执行的,还可以是由其他具备数据处理能力的设备执行的,本实施例中以由扫地机器人执行为例进行说明。
目标用户可以使用目标帐号登录其终端设备(即,目标终端)上运行的目标应用,该目标应用是与扫地机器人匹配的应用,通过使用目标帐号登录的目标应用可以对与该目标帐号绑定的扫地机器人进行控制,以实现目标用户与扫地机器人的交互。例如,通过目标应用可以向目标用户显示扫地机器人建立的区域地图,目标用户可以通过目标应用向扫地机器人发送清扫指令等。
扫地机器人可以用于对目标区域进行清扫,该目标区域可以是某一封闭的区域,例如,卧室、客厅、卫生间等,也可以是不封闭的区域,例如,室外场地。在目标区域中可以包含目标物体,该目标物体可以是具有一定形态的物体,例如,垃圾桶、电线、拖鞋等,但不限于此,本实施例中对于目标物体的物体种类不做限定。
扫地机器人上可以配置有数据采集部件,该数据采集部件可以是摄像头、红外传感器等,但不限于此,其他能够采集物体图像的部件均可应用于本实施例。在使用扫地机器人对目标区域进行清扫的过程中、之前、或者之后,该数据采集部件可以执行数据采集操作,所采集到的数据可以是目标物体的目标物体图像。
可选地,目标物体图像可以是目标物体在某一角度的物体图像,也可以是目标物体在多个角度的物体图像。使用目标物体在多个角度的物体图像可以进行多种操作,例如,可以构建出该目标物体的三维形状,从而可以提高识别的准确度。
步骤S204,根据目标物体图像对目标物体进行识别,得到目标识别结果,其中,目标识别结果包含目标物体的描述信息。
目标物体图像可以表示出目标物体的属性信息,例如,物体的形状、颜色、尺寸等。在本实施例中,可以根据目标物体图像对目标物体进行识别处理,得到目标识别结果。得到的目标识别结果可以包含目标物体的描述信息,例如,描述目标物体的物体种类、目标物体的物体轮廓等的信息。
作为一种可选的实施方式,上述识别操作可以是由扫地机器人执行的。在获取到上述目标物体图像之后,扫地机器人可以直接根据该目标物体图像执行上述识别操作。
作为另一种可选的实施方式,上述识别操作可以是由扫地机器人所属的智能设备、后台服务器等其他设备执行的。扫地机器人可以将获取到的目标物体图像发送给其他设备,在接收到目标物体图像之后,其他设备可以根据该目标物体图像执行上述识别操作。
可选地,识别操作可以是实时(例如,获取到目标物体图像之后立即执行识别操作)的,也可以是非实时(例如,获取到目标物体图像之后,在空闲时再执行识别操作)的,本实施例中对此不做限定。
目标物体图像可以包含目标物体不同角度的多个物体图像,在进行识别时,可以分别对多个物体图像中的各个物体图像进行识别,得到各个物体图像的识别结果;然后对各个物体图像的识别结果进行融合,得到目标识别结果。可选地,在进行识别时,也可以同时对多个物体图像进行识别,通过融合各个物体图像的图像特征,得到目标识别结果。
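对于"分别识别各个物体图像再融合识别结果"的方式,下面以 Python 给出一个最简化的示意:对各视角输出的类别概率取平均,再取平均概率最高的候选种类。其中候选种类列表与概率数值均为本示例的假设,并非专利记载的具体实现。

```python
# 多视角识别结果融合的简化示意:对每张物体图像输出的类别概率取平均,
# 再选取平均概率最高的候选种类作为目标识别结果(候选与数值均为示例假设)。
CANDIDATES = ["垃圾桶", "电线", "拖鞋"]

def fuse_predictions(per_image_probs):
    """per_image_probs:每张物体图像对应一个与 CANDIDATES 等长的概率列表。"""
    n = len(per_image_probs)
    avg = [sum(p[i] for p in per_image_probs) / n for i in range(len(CANDIDATES))]
    best = max(range(len(CANDIDATES)), key=lambda i: avg[i])
    return CANDIDATES[best], avg[best]

label, score = fuse_predictions([
    [0.6, 0.3, 0.1],  # 视角 1
    [0.5, 0.4, 0.1],  # 视角 2
    [0.7, 0.2, 0.1],  # 视角 3
])
```

同时对多个物体图像提取特征后再融合的方式,也可以在同一接口下替换上述按概率平均的策略。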
可选地,根据目标物体图像对目标物体进行识别可以是:根据目标物体图像构建出与该目标物体对应的目标物体模型,该目标物体模型用于表示该目标物体的三维形状;根据该目标物体模型对目标物体进行识别。
步骤S206,按照目标识别结果,在目标区域地图中对目标对象进行标注,其中,目标对象为目标区域地图中与目标物体对应的对象,目标区域地图为通过扫地机器人为目标区域建立的区域地图。
通过该扫地机器人为目标区域建立的区域地图为目标区域地图。通过扫地机器人建立区域地图是指:使用该扫地机器人对某一区域进行数据采集所得到的数据建立该区域的区域地图。建立区域地图的操作可以是由扫地机器人执行的,也可以是由其他设备(例如,扫地机器人所属的智能设备、后台服务器等)执行的,本实施例中对此不做限定。
本实施例中的区域地图的处理方法可以是一种区域地图中的对象标注方案,执行对象标注的时机可以有多种。例如,对象标注可以是在建立目标区域地图时执行的。在此情况下,地图构建和对象标注是同步执行的。
可选地,对象标注也可以是在目标区域地图建立之后执行的。在此情况下,对象标注可以是对已建立的目标区域地图中已有的对象执行的。上述对象标注方式可以兼容已有的区域地图的建立方案,丰富目标区域地图中的物体信息。对象标注也可以是对已建立的目标区域地图中新增的对象执行的。上述对象标注方式可以适用于在目标区域中添加新物体的场景,提高区域地图中的物体信息的准确性,进而提高区域地图表征区域的能力。
通过上述步骤S202至步骤S206,获取与目标区域中的目标物体对应的目标物体图像,其中,目标区域为扫地机器人所清扫的区域;根据目标物体图像对目标物体进行识别,得到目标识别结果,其中,目标识别结果包含目标物体的描述信息;按照目标识别结果,在目标区域地图中对目标对象进行标注,其中,目标对象为目标区域地图中与目标物体对应的对象,目标区域地图为通过扫地机器人为目标区域建立的区域地图,解决了相关技术中通过扫地机器人构建区域地图的方式存在信息获取效率低的问题,提高了物体信息获取的准确性和效率。
在一个示例性实施例中,在获取与目标区域中的目标物体对应的目标物体图像之前,上述方法还包括:
S11,在扫地机器人对目标区域进行清扫的过程中,检测到目标区域中存在目标物体。
在本实施例中,获取目标物体图像可以是在扫地机器人对目标区域进行清扫的过程中执行的。扫地机器人在清扫的过程中如果遇到目标物体,上述数据采集部件或者其他的感应部件可以检测到目标区域内存在该目标物体。如果检测到目标区域中存在目标物体,可以触发执行获取目标物体图像的步骤,而不管是否已经为目标区域建立了目标区域地图。
可选地,为避免重复对对象进行标注,扫地机器人可以判断该目标区域地图中是否存在与该目标物体对应的对象、以及是否已经对与该目标物体对应的对象进行了标注。在目标区域地图中不存在与该目标物体对应的对象、或者在目标区域地图中存在与该目标物体对应的对象、但并未对与该目标物体对应的对象进行标注的情况下,触发执行获取目标物体图像的步骤。
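上述"避免重复标注"的触发判断可以用如下 Python 片段示意,其中地图对象记录的数据结构为本示例的假设:

```python
# 避免重复标注的触发判断示意:仅当地图中不存在与物体对应的对象,
# 或对应对象存在但尚未标注时,才触发获取目标物体图像。
def should_capture(map_objects, object_id):
    """map_objects:{对象ID: {"annotated": 是否已标注}} 形式的记录(示例假设)。"""
    obj = map_objects.get(object_id)
    if obj is None:              # 地图中不存在对应对象
        return True
    return not obj["annotated"]  # 存在对应对象但尚未标注
```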
通过本实施例,扫地机器人在清扫的过程中如果检测到区域内存在物体,则触发获取物体的物体图像,进而对与该物体对应的对象进行标注,可以提高对象标注的及时性。
在一个示例性实施例中,根据目标物体图像对目标物体进行识别,得到目标识别结果包括:
S21,根据目标物体图像对目标物体进行种类识别,得到目标种类信息,其中,目标种类信息用于表示目标物体的物体种类,目标识别结果包括目标种类信息。
相较于物体的大小、温度、颜色等,物体种类更便于用户获取物体的信息,即,能够提供更为丰富的物体信息。因此,在本实施例中,为了提高物体信息提供的便捷性,目标识别结果可以包括用于表示目标物体的物体种类的目标种类信息。
扫地机器人可以根据目标物体图像对目标物体进行种类识别,得到目标种类信息。对应地,按照目标识别结果,在目标区域地图中对目标对象进行标注可以包括:在目标区域地图中与目标对象匹配的目标位置上标注出目标种类信息。
该目标种类信息的标注方式可以有多种,可以是文字,也可以是文字以外的其他表达形式,例如,符号、图案等。上述目标位置可以是目标对象的上方、下方、左侧、右侧等,且目标位置与目标对象之间的距离可以小于或者等于目标距离阈值,本实施例中对于目标种类信息的标注方式以及标注位置不做限定。
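"在与目标对象匹配的目标位置上标注种类信息"这一步可以用如下 Python 片段示意:按期望偏移把标注点放在对象中心附近,并保证标注点与对象中心的距离不超过目标距离阈值。坐标、偏移量与阈值均为本示例的假设。

```python
# 种类信息标注位置的示意:按期望偏移放置标注点,
# 超出目标距离阈值时按比例收缩偏移量(数值均为示例假设)。
def label_position(center, offset=(0.0, -1.0), max_dist=2.0):
    dx, dy = offset
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > max_dist:              # 偏移超出阈值时,收缩到阈值以内
        scale = max_dist / dist
        dx, dy = dx * scale, dy * scale
    return (center[0] + dx, center[1] + dy)
```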
通过本实施例,根据物体图像进行种类识别,从而在区域地图对物体种类进行标注,可以丰富区域地图所提供的物体信息,提高物体信息获取的便捷性。
在一个示例性实施例中,根据目标物体图像对目标物体进行种类识别,得到目标种类信息包括:
S31,将目标物体图像输入到目标识别模型,得到目标识别模型输出的目标种类信息,目标识别模型是使用样本物体的物体图像对初始识别模型进行训练得到的,样本物体标注了对应的物体种类。
在对目标物体进行种类识别时,采用的识别方式可以是非AI(Artificial Intelligence,人工智能)的识别方式。可选地,在本实施例中,可以基于AI识别物体的种类,例如,可以使用目标识别模型识别物体的种类。
目标识别模型是一种神经网络模型,也可以是其他类型的AI模型。该目标识别模型可以是使用样本物体的物体图像对初始识别模型进行训练得到的,该样本物体标注了对应的物体种类,从而可以根据识别模型的输出结果和标注的物体种类对识别模型的模型参数进行调整,得到训练好的目标识别模型。
为了提高模型训练的准确性,该样本物体的物体图像可以是使用与获取目标物体图像的扫地机器人相同种类的扫地机器人对样本物体进行数据采集得到的。此外,训练识别模型的设备可以是扫地机器人,也可以是其他的设备,本实施例中对此不做限定。
在进行物体种类识别时,可以将目标物体图像输入到目标识别模型,该目标识别模型可以基于目标物体图像对目标物体进行种类识别,确定目标物体的物体种类为多个候选物体种类中的各个候选物体种类的概率,进而从多个候选物体种类中确定出与该目标物体最匹配的物体种类。目标种类信息指示的是与该目标物体最匹配的物体种类。
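"确定各候选物体种类的概率并取最匹配者"这一步可以用如下 Python 片段示意:对模型输出的各候选得分做 softmax 归一化后取概率最高的候选。得分数值与候选列表均为本示例的假设,并不代表专利记载的模型结构。

```python
import math

# 目标识别模型输出的简化示意:对各候选物体种类的得分做 softmax 归一化,
# 取概率最高的候选种类作为目标种类信息(得分与候选均为示例假设)。
def classify(logits, candidates):
    exp_scores = [math.exp(v) for v in logits]
    total = sum(exp_scores)
    probs = [v / total for v in exp_scores]
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best], probs[best]
```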
通过本实施例,基于AI识别物体的种类,可以提高物体种类识别的准确度。
在一个示例性实施例中,根据目标物体图像对目标物体进行识别,得到目标识别结果包括:
S41,根据目标物体图像对目标物体进行轮廓识别,得到目标轮廓信息,其中,目标轮廓信息用于表示目标物体的物体轮廓,目标识别结果包括目标轮廓信息。
相关技术中,在区域地图中不会将障碍物的轮廓精确画出,也就无法知道相近的物体是否为同一个物体的不同部位。为了提高物体信息获取的准确性,在本实施例中,目标识别结果可以包括用于表示目标物体的物体轮廓的目标轮廓信息。
扫地机器人可以根据目标物体图像对目标物体进行轮廓识别,得到目标轮廓信息。对目标物体图像进行轮廓识别的方式可以有多种,可以包括但不限于以下至少之一:图像轮廓提取,图像分割,基于AI语义的图像分割等,本实施例中对于轮廓识别的方式不做限定。
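上述"图像轮廓提取"可以用一个极简的二值栅格示例来示意:凡与空闲格相邻(含越界)的占据格视为轮廓格。此处的栅格表示与判定规则仅为示例假设,实际实现也可采用图像分割、基于AI语义的图像分割等方式。

```python
# 轮廓识别的极简栅格示意:1 表示物体占据、0 表示空闲,
# 与空闲格(或栅格边界)相邻的占据格即视为轮廓格(示例假设)。
def extract_contour(grid):
    rows, cols = len(grid), len(grid[0])
    contour = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 1:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 0:
                    contour.add((r, c))   # 该占据格处于物体边界上
                    break
    return contour
```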
可选地,对于具有多个不同部位的物体,目标轮廓信息所表示出的轮廓可以包含不同部位的具体轮廓。在进行标注时,属于同一对象的不同部位可以采用相同的标注方式进行标注,以体现这些部位属于同一对象。目标轮廓信息所表示出的轮廓也可以包含该物体的整体轮廓,即,能够包含该物体所有部位的轮廓。在进行标注时,一个对象的轮廓可以包含属于该对象的所有部位,以体现这些部位均属于该对象。
通过本实施例,根据物体图像进行轮廓识别,从而在区域地图中标注出物体的轮廓,可以提高物体信息获取的便捷性。
在一个示例性实施例中,按照目标识别结果,在目标区域地图中对目标对象进行标注包括:
S51,按照目标轮廓信息,在目标区域地图中以虚拟墙的形式标注出目标对象的轮廓。
如果目标识别结果包括上述目标轮廓信息,按照目标识别结果,在目标区域地图中对目标对象进行标注可以包括:按照目标轮廓信息,在目标区域地图中标注出目标对象的轮廓。目标对象的轮廓可以是根据目标物体的物体轮廓确定的,例如,目标对象的轮廓是对目标物体的物体轮廓进行比例缩放、平移等操作之后得到的。
目标对象的轮廓的标注方式可以有多种,可以以虚拟墙的形式标注,也可以以虚拟墙以外的其他直线、曲线的形式标注。在本实施例中,在目标地图中以虚拟墙的形式标注出目标对象的轮廓,可以将目标物体所在的区域范围设置为扫地机器人的绕行区域(或者说,禁止清扫区域),从而可以控制扫地机器人在清扫时避开该目标物体。
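"以虚拟墙的形式标注轮廓并将其设为绕行区域"可以用如下 Python 片段示意:把轮廓格在栅格地图中置为虚拟墙值,路径规划时跳过虚拟墙格。地图取值约定(0 表示空闲、2 表示虚拟墙)为本示例的假设。

```python
# 虚拟墙标注的示意:轮廓格写入虚拟墙值,清扫路径规划时绕开这些格
# (取值约定为示例假设:0 空闲、2 虚拟墙)。
VIRTUAL_WALL = 2

def mark_virtual_wall(area_map, contour_cells):
    for r, c in contour_cells:
        area_map[r][c] = VIRTUAL_WALL
    return area_map

def traversable(area_map, cell):
    """清扫路径规划时判断某格是否可通行。"""
    r, c = cell
    return area_map[r][c] != VIRTUAL_WALL
```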
通过本实施例,以虚拟墙的形式在区域地图中标注出与区域内的物体对应的对象轮廓,可以提高轮廓标注的便捷性,同时方便对扫地机器人的清扫进行控制。
在一个示例性实施例中,在按照目标识别结果,在目标区域地图中对目标对象进行标注之后,上述方法还包括:
S61,将目标区域地图发送给目标终端上的目标应用进行显示,其中,目标应用使用与扫地机器人绑定的目标帐号登录。
对于建立的目标区域地图,其可以存储在扫地机器人上,以供扫地机器人对目标区域进行清扫。可选地,扫地机器人也可以将目标区域地图发送给上述目标终端上使用目标帐号登录的目标应用。发送目标区域地图的时机可以有一种或者多种,例如,在目标区域地图建立之后,在目标区域地图更新之后,在接收到目标应用的地图获取请求之后,还可以是其他允许发送区域地图的时机。
目标应用中用于显示区域地图的显示界面为目标显示界面。在接收到目标区域地图之后,目标应用可以在目标显示界面上显示目标区域地图。可选地,目标应用可以将目标区域地图存储到目标终端上,并在检测到地图显示指令之后,在目标显示界面上显示目标区域地图。
通过本实施例,通过将区域地图发送到终端App进行显示,可以便于用户查看扫地机器人构建的区域地图,提高信息显示的便捷性。
根据本申请实施例的另一个方面,还提供了一种区域地图的处理方法。可选地,在本实施例中,上述区域地图的处理方法可以应用于如图1所示的由终端102和服务器104所构成的硬件环境中。已经进行过描述的,在此不做赘述。
以由用户终端来执行本实施例中的区域地图的处理方法为例,图3是根据本申请实施例的另一种可选的区域地图的处理方法的流程示意图,如图3所示,该方法的流程可以包括以下步骤:
步骤S302,通过目标应用接收扫地机器人发送的目标区域地图,其中,目标应用使用与扫地机器人绑定的目标帐号登录,目标区域地图为通过扫地机器人为目标区域建立的区域地图。
本实施例中的区域地图的处理方法可以应用到通过扫地机器人进行区域地图构建的场景中。在本实施例中,扫地机器人、目标应用、目标区域地图与前述实施例中相同或者类似,例如,目标区域地图可以是通过前述实施例中的区域地图的处理方法所建立或者更新的区域地图,已经进行过描述的,在此不做赘述。
对于目标用户的终端设备,即,目标终端,其上可以运行有使用目标帐号登录的目标应用。该目标应用与扫地机器人之间可以建立有通信连接,通过两者之间的通信连接,目标应用可以接收扫地机器人发送的上述目标区域地图。
步骤S304,在目标应用的目标显示界面上显示目标区域地图,其中,在目标区域地图上显示有目标对象以及目标对象的目标标注信息,目标对象为目标区域地图中,与目标区域中的目标物体对应的对象,目标标注信息用于描述目标对象。
目标应用中用于显示区域地图的显示界面为目标显示界面。在接收到目标区域地图之后,目标应用可以在目标显示界面上显示目标区域地图。可选地,目标应用可以将目标区域地图存储到目标终端上,并在检测到地图显示指令之后,在目标显示界面上显示目标区域地图。
在显示的目标区域地图上可以包含上述目标对象和目标对象的标注信息,即,目标标注信息。该目标标注信息可以用于描述目标对象,从而丰富区域地图中所能够提供的物体信息。可选地,目标标注信息可以包含按照前述目标识别结果在目标区域地图中对目标对象进行标注所得到的标注信息,也可以包含其他的标注信息,例如,通过其他方式对目标对象进行标注所得到的标注信息,本实施例中对此不做限定。
通过上述步骤S302至步骤S304,通过目标应用接收扫地机器人发送的目标区域地图,其中,目标应用使用与扫地机器人绑定的目标帐号登录,目标区域地图为通过扫地机器人为目标区域建立的区域地图;在目标应用的目标显示界面上显示目标区域地图,其中,在目标区域地图上显示有目标对象以及目标对象的目标标注信息,目标对象为目标区域地图中,与目标区域中的目标物体对应的对象,目标标注信息用于描述目标对象,解决相关技术中通过扫地机器人构建区域地图的方式存在信息获取效率低的问题,提高了物体信息获取的准确性和效率。
在一个示例性实施例中,在目标应用的目标显示界面上显示目标区域地图包括:
S71,在目标显示界面上显示的目标区域地图中显示目标对象和目标对象的轮廓信息,其中,目标对象的轮廓信息用于表示目标对象的轮廓,目标标注信息包括目标对象的轮廓信息。
目标标注信息可以包含目标对象的轮廓信息。该目标对象的轮廓信息可以用于表示目标对象的轮廓,其可以包含按照前述目标轮廓信息在目标区域地图中对目标对象的轮廓进行标注所得到的轮廓信息,也可以包含通过其他方式所得到的目标对象的轮廓信息。
在目标显示界面上所显示的目标区域地图中可以包含目标对象以及目标对象的轮廓信息。如果一个对象包含多个部位,多个部位可以采用相同的标注方式进行标注,以体现这些部位属于同一对象;或者,该对象的轮廓可以包含属于该对象的所有部位,以体现这些部位均属于该对象。
通过本实施例,在区域地图中标注出物体的轮廓,可以提高物体信息获取的便捷性。
在一个示例性实施例中,在目标显示界面上显示的目标区域地图中显示目标对象和目标对象的轮廓信息包括:
S81,在目标显示界面上显示的目标区域地图中显示目标对象、以及以虚拟墙的形式显示目标对象的轮廓信息。
目标对象的轮廓信息的显示方式可以有多种,例如,可以以虚拟墙的形式显示,也可以以虚拟墙以外的其他直线、曲线的形式显示。在本实施例中,在目标地图中以虚拟墙的形式显示出目标对象的轮廓信息,可以将目标物体所在的区域范围设置为扫地机器人的绕行区域,从而可以控制扫地机器人在清扫时避开该目标物体。
通过本实施例,以虚拟墙的形式在区域地图中显示该区域地图内的对象的轮廓,可以提高轮廓显示的便捷性,同时方便对扫地机器人的清扫进行控制。
在一个示例性实施例中,在目标应用的目标显示界面上显示目标区域地图包括:
S91,在目标显示界面上显示的目标区域地图中显示目标对象和目标种类信息,其中,目标种类信息用于表示目标物体的物体种类,目标标注信息包括目标种类信息。
目标标注信息可以包含目标种类信息。该目标种类信息可以用于表示目标物体的物体种类,其可以是按照前述根据目标物体图像对目标物体进行种类识别所得到的目标种类信息,也可以是通过其他方式所得到的目标种类信息。
在目标显示界面上所显示的目标区域地图中可以包含目标对象以及目标种类信息。目标种类信息可以显示在与目标对象匹配的目标位置上。该目标种类信息的显示方式可以有多种,可以是文字,也可以是文字以外的其他表达形式,例如,符号、图案等。上述目标位置可以是目标对象的上方、下方、左侧、右侧等,且目标种类信息与目标对象之间的距离可以小于或者等于目标距离阈值,本实施例中对于目标种类信息的显示方式以及标注位置不做限定。
通过本实施例,在区域地图上显示与该区域地图中的对象对应的物体种类,可以丰富区域地图所提供的物体信息,提高物体信息获取的便捷性。
下面结合可选示例对本实施例中的区域地图的处理方法进行解释说明。本示例中提供的是一种基于AI技术的虚拟墙自标注方案,可以在App地图上以虚拟墙的形式自动标注出物体(例如,障碍物)的轮廓,并标注种类。
如图4所示,本可选示例中的区域地图的处理方法的流程可以包括以下步骤:
步骤S402,扫地机器人在清扫过程中检测到物体;
步骤S404,扫地机器人识别出物体的三维形状,并基于AI识别出物体的种类;
步骤S406,在App地图中以虚拟墙的形式标注出物体的轮廓,并标注出物体种类,其中,该物体在App地图中以二维图表示。
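图4的步骤S402至步骤S406可以用如下端到端的 Python 示意串联起来:检测到物体后识别其种类与轮廓,再把轮廓以虚拟墙形式写入地图并记录种类文字。识别环节以桩函数代替真实的三维重建与AI识别,所有名称与数据均为示例假设。

```python
# 检测→识别→标注流程的端到端示意(名称与数据均为示例假设)。
def run_pipeline(object_image, recognizer, area_map):
    kind, contour = recognizer(object_image)   # 对应步骤S404:识别种类与轮廓
    area_map["walls"].update(contour)          # 对应步骤S406:以虚拟墙标注轮廓
    area_map["labels"][kind] = min(contour)    # 种类文字的标注位置,仅作示意
    return area_map

def stub_recognizer(_image):
    # 桩函数:固定返回一个种类与一组轮廓格
    return "垃圾桶", {(1, 1), (1, 2)}

result = run_pipeline("image-bytes", stub_recognizer,
                      {"walls": set(), "labels": {}})
```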
如图5所示,App地图中显示有物体的二维图,物体种类以文字表示,虚线为物体轮廓的虚拟墙。在App地图上,用户可以看到房间内物体的种类的同时,更清晰准确的看到真实形状。
通过本示例,通过在App地图上标注出物体的种类和轮廓,可以让用户在App地图上看到更真实准确的物体信息,给用户提供更精准丰富的交互信息,提升体验。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM(Read-Only Memory,只读存储器)/RAM(Random Access Memory,随机存取存储器)、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
根据本申请实施例的又一个方面,还提供了一种用于实施上述区域地图的处理方法的区域地图的处理装置。图6是根据本申请实施例的一种可选的区域地图的处理装置的结构框图,如图6所示,该装置可以包括:
获取单元602,用于获取与目标区域中的目标物体对应的目标物体图像,其中,目标区域为扫地机器人所清扫的区域;
识别单元604,与获取单元602相连,用于根据目标物体图像对目标物体进行识别,得到目标识别结果,其中,目标识别结果包含目标物体的描述信息;
标注单元606,与识别单元604相连,用于按照目标识别结果,在目标区域地图中对目标对象进行标注,其中,目标对象为目标区域地图中与目标物体对应的对象,目标区域地图为通过扫地机器人为目标区域建立的区域地图。
需要说明的是,该实施例中的获取单元602可以用于执行上述步骤S202,该实施例中的识别单元604可以用于执行上述步骤S204,该实施例中的标注单元606可以用于执行上述步骤S206。
通过上述模块,获取与目标区域中的目标物体对应的目标物体图像,其中,目标区域为扫地机器人所清扫的区域;根据目标物体图像对目标物体进行识别,得到目标识别结果,其中,目标识别结果包含目标物体的描述信息;按照目标识别结果,在目标区域地图中对目标对象进行标注,其中,目标对象为目标区域地图中与目标物体对应的对象,目标区域地图为通过扫地机器人为目标区域建立的区域地图,解决了相关技术中通过扫地机器人构建区域地图的方式存在信息获取效率低的问题,提高了物体信息获取的准确性和效率。
在一个示例性实施例中,上述装置还包括:
检测单元,用于在获取与目标区域中的目标物体对应的目标物体图像之前,在扫地机器人对目标区域进行清扫的过程中,检测到目标区域中存在目标物体。
在一个示例性实施例中,识别单元604包括:
第一识别模块,用于根据目标物体图像对目标物体进行种类识别,得到目标种类信息,其中,目标种类信息用于表示目标物体的物体种类,目标识别结果包括目标种类信息。
在一个示例性实施例中,第一识别模块包括:
输入子模块,用于将目标物体图像输入到目标识别模型,得到目标识别模型输出的目标种类信息,目标识别模型是使用样本物体的物体图像对初始识别模型进行训练得到的,样本物体标注了对应的物体种类。
在一个示例性实施例中,识别单元604包括:
第二识别模块,用于根据目标物体图像对目标物体进行轮廓识别,得到目标轮廓信息,其中,目标轮廓信息用于表示目标物体的物体轮廓,目标识别结果包括目标轮廓信息。
在一个示例性实施例中,标注单元606包括:
标注模块,用于按照目标轮廓信息,在目标区域地图中以虚拟墙的形式标注出目标对象的轮廓。
在一个示例性实施例中,上述装置还包括:
发送单元,用于在按照目标识别结果,在目标区域地图中对目标对象进行标注之后,将目标区域地图发送给目标终端上的目标应用进行显示,其中,目标应用使用与扫地机器人绑定的目标帐号登录。
根据本申请实施例的又一个方面,还提供了一种用于实施上述区域地图的处理方法的区域地图的处理装置。图7是根据本申请实施例的另一种可选的区域地图的处理装置的结构框图,如图7所示,该装置可以包括:
接收单元702,用于通过目标应用接收扫地机器人发送的目标区域地图,其中,目标应用使用与扫地机器人绑定的目标帐号登录,目标区域地图为通过扫地机器人为目标区域建立的区域地图;
显示单元704,与接收单元702相连,用于在目标应用的目标显示界面上显示目标区域地图,其中,在目标区域地图上显示有目标对象以及目标对象的目标标注信息,目标对象为目标区域地图中,与目标区域中的目标物体对应的对象,目标标注信息用于描述目标对象。
需要说明的是,该实施例中的接收单元702可以用于执行上述步骤S302,该实施例中的显示单元704可以用于执行上述步骤S304。
通过上述模块,通过目标应用接收扫地机器人发送的目标区域地图,其中,目标应用使用与扫地机器人绑定的目标帐号登录,目标区域地图为通过扫地机器人为目标区域建立的区域地图;在目标应用的目标显示界面上显示目标区域地图,其中,在目标区域地图上显示有目标对象以及目标对象的目标标注信息,目标对象为目标区域地图中,与目标区域中的目标物体对应的对象,目标标注信息用于描述目标对象,解决相关技术中通过扫地机器人构建区域地图的方式存在信息获取效率低的问题,提高了物体信息获取的准确性和效率。
在一个示例性实施例中,显示单元704包括:
第一显示模块,用于在目标显示界面上显示的目标区域地图中显示目标对象和目标对象的轮廓信息,其中,目标对象的轮廓信息用于表示目标对象的轮廓,目标标注信息包括目标对象的轮廓信息。
在一个示例性实施例中,第一显示模块包括:
显示子模块,用于在目标显示界面上显示的目标区域地图中显示目标对象、以及以虚拟墙的形式显示目标对象的轮廓信息。
在一个示例性实施例中,显示单元704包括:
第二显示模块,用于在目标显示界面上显示的目标区域地图中显示目标对象和目标种类信息,其中,目标种类信息用于表示目标物体的物体种类,目标标注信息包括目标种类信息。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现,其中,硬件环境包括网络环境。
根据本申请实施例的又一个方面,还提供了一种存储介质。可选地,在本实施例中,上述存储介质可以用于执行本申请实施例中上述任一项区域地图的处理方法的程序代码。
可选地,在本实施例中,上述存储介质可以位于上述实施例所示的网络中的多个网络设备中的至少一个网络设备上。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
S1,获取与目标区域中的目标物体对应的目标物体图像,其中,目标区域为扫地机器人所清扫的区域;
S2,根据目标物体图像对目标物体进行识别,得到目标识别结果,其中,目标识别结果包含目标物体的描述信息;
S3,按照目标识别结果,在目标区域地图中对目标对象进行标注,其中,目标对象为目标区域地图中与目标物体对应的对象,目标区域地图为通过扫地机器人为目标区域建立的区域地图。
或者,被设置为存储用于执行以下步骤的程序代码:
S1,通过目标应用接收扫地机器人发送的目标区域地图,其中,目标应用使用与扫地机器人绑定的目标帐号登录,目标区域地图为通过扫地机器人为目标区域建立的区域地图;
S2,在目标应用的目标显示界面上显示目标区域地图,其中,在目标区域地图上显示有目标对象以及目标对象的目标标注信息,目标对象为目标区域地图中,与目标区域中的目标物体对应的对象,目标标注信息用于描述目标对象。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例中对此不再赘述。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、ROM、RAM、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
根据本申请实施例的又一个方面,还提供了一种用于实施上述区域地图的处理方法的电子装置,该电子装置可以是服务器、终端、或者其组合。
图8是根据本申请实施例的一种可选的电子装置的结构框图,如图8所示,包括处理器802、通信接口804、存储器806和通信总线808,其中,处理器802、通信接口804和存储器806通过通信总线808完成相互间的通信,其中,
存储器806,用于存储计算机程序;
处理器802,用于执行存储器806上所存放的计算机程序时,实现如下步骤:
S1,获取与目标区域中的目标物体对应的目标物体图像,其中,目标区域为扫地机器人所清扫的区域;
S2,根据目标物体图像对目标物体进行识别,得到目标识别结果,其中,目标识别结果包含目标物体的描述信息;
S3,按照目标识别结果,在目标区域地图中对目标对象进行标注,其中,目标对象为目标区域地图中与目标物体对应的对象,目标区域地图为通过扫地机器人为目标区域建立的区域地图。
或者,处理器802,用于执行存储器806上所存放的计算机程序时,实现如下步骤:
S1,通过目标应用接收扫地机器人发送的目标区域地图,其中,目标应用使用与扫地机器人绑定的目标帐号登录,目标区域地图为通过扫地机器人为目标区域建立的区域地图;
S2,在目标应用的目标显示界面上显示目标区域地图,其中,在目标区域地图上显示有目标对象以及目标对象的目标标注信息,目标对象为目标区域地图中,与目标区域中的目标物体对应的对象,目标标注信息用于描述目标对象。
可选地,在本实施例中,通信总线可以是PCI(Peripheral Component Interconnect,外设部件互连标准)总线、或EISA(Extended Industry Standard Architecture,扩展工业标准结构)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图8中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。通信接口用于上述电子装置与其他设备之间的通信。
上述的存储器可以包括RAM,也可以包括非易失性存储器(non-volatile memory),例如,至少一个磁盘存储器。可选地,存储器还可以是至少一个位于远离前述处理器的存储装置。
作为一种示例,上述存储器806中可以但不限于包括上述设备的控制装置中的获取单元602、识别单元604、以及标注单元606。此外,还可以包括但不限于上述设备的控制装置中的其他模块单元,本示例中不再赘述。
作为另一种示例,上述存储器806中可以但不限于包括上述设备的控制装置中的接收单元702、以及显示单元704。此外,还可以包括但不限于上述设备的控制装置中的其他模块单元,本示例中不再赘述。
上述处理器可以是通用处理器,可以包含但不限于:CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processing,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
本领域普通技术人员可以理解,图8所示的结构仅为示意,实施上述区域地图的处理方法的设备可以是终端设备,该终端设备可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图8并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图8中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图8所示的不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、ROM、RAM、磁盘或光盘等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例中所提供的方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (16)

  1. 一种区域地图的处理方法,其特征在于,包括:
    获取与目标区域中的目标物体对应的目标物体图像,其中,所述目标区域为扫地机器人所清扫的区域;
    根据所述目标物体图像对所述目标物体进行识别,得到目标识别结果,其中,所述目标识别结果包含所述目标物体的描述信息;
    按照所述目标识别结果,在目标区域地图中对目标对象进行标注,其中,所述目标对象为所述目标区域地图中与所述目标物体对应的对象,所述目标区域地图为通过所述扫地机器人为所述目标区域建立的区域地图。
  2. 如权利要求1所述的方法,其特征在于,在获取与所述目标区域中的所述目标物体对应的所述目标物体图像之前,所述方法还包括:
    在所述扫地机器人对所述目标区域进行清扫的过程中,检测到所述目标区域中存在所述目标物体。
  3. 如权利要求1所述的方法,其特征在于,根据所述目标物体图像对所述目标物体进行识别,得到所述目标识别结果包括:
    根据所述目标物体图像对所述目标物体进行种类识别,得到目标种类信息,其中,所述目标种类信息用于表示所述目标物体的物体种类,所述目标识别结果包括所述目标种类信息。
  4. 如权利要求3所述的方法,其特征在于,根据所述目标物体图像对所述目标物体进行种类识别,得到所述目标种类信息包括:
    将所述目标物体图像输入到目标识别模型,得到所述目标识别模型输出的所述目标种类信息,所述目标识别模型是使用样本物体的物体图像对初始识别模型进行训练得到的,所述样本物体标注了对应的物体种类。
  5. 如权利要求1所述的方法,其特征在于,根据所述目标物体图像对所述目标物体进行识别,得到所述目标识别结果包括:
    根据所述目标物体图像对所述目标物体进行轮廓识别,得到目标轮廓信息,其中,所述目标轮廓信息用于表示所述目标物体的物体轮廓,所述目标识别结果包括所述目标轮廓信息。
  6. 如权利要求5所述的方法,其特征在于,按照所述目标识别结果,在所述目标区域地图中对所述目标对象进行标注包括:
    按照所述目标轮廓信息,在所述目标区域地图中以虚拟墙的形式标注出所述目标对象的轮廓。
  7. 如权利要求5所述的方法,其特征在于,所述目标物体具有多个部位,
    所述按照所述目标识别结果,在目标区域地图中对目标对象进行标注,包括:
    按照所述目标识别结果,在目标区域地图中对所述目标物体的所述部位以相同的标注方式进行标注,和/或
    按照所述目标识别结果,在所述目标区域地图中对所述目标物体的轮廓进行标注,其中标注的所述轮廓包括了所述目标物体的所有所述部位。
  8. 如权利要求1至7中任一项所述的方法,其特征在于,在按照所述目标识别结果,在所述目标区域地图中对所述目标对象进行标注之后,所述方法还包括:
    将所述目标区域地图发送给目标终端上的目标应用进行显示,其中,所述目标应用使用与所述扫地机器人绑定的目标帐号登录。
  9. 一种区域地图的处理方法,其特征在于,包括:
    通过目标应用接收扫地机器人发送的目标区域地图,其中,所述目标应用使用与所述扫地机器人绑定的目标帐号登录,所述目标区域地图为通过所述扫地机器人为目标区域建立的区域地图;
    在所述目标应用的目标显示界面上显示所述目标区域地图,其中,在所述目标区域地图上显示有目标对象以及所述目标对象的目标标注信息,所述目标对象为所述目标区域地图中与所述目标区域中的目标物体对应的对象,所述目标标注信息用于描述所述目标对象。
  10. 如权利要求9所述的方法,其特征在于,在所述目标应用的所述目标显示界面上显示所述目标区域地图包括:
    在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象和所述目标对象的轮廓信息,其中,所述目标对象的轮廓信息用于表示所述目标对象的轮廓,所述目标标注信息包括所述目标对象的轮廓信息。
  11. 如权利要求10所述的方法,其特征在于,在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象和所述目标对象的轮廓信息包括:
    在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象、以及以虚拟墙的形式显示所述目标对象的轮廓信息。
  12. 如权利要求9至11中任一项所述的方法,其特征在于,在所述目标应用的所述目标显示界面上显示所述目标区域地图包括:
    在所述目标显示界面上显示的所述目标区域地图中显示所述目标对象和目标种类信息,其中,所述目标种类信息用于表示所述目标物体的物体种类,所述目标标注信息包括所述目标种类信息。
  13. 一种区域地图的处理装置,其特征在于,包括:
    获取单元,用于获取与目标区域中的目标物体对应的目标物体图像,其中,所述目标区域为扫地机器人所清扫的区域;
    识别单元,用于根据所述目标物体图像对所述目标物体进行识别,得到目标识别结果,其中,所述目标识别结果包含所述目标物体的描述信息;
    标注单元,用于按照所述目标识别结果,在目标区域地图中对目标对象进行标注,其中,所述目标对象为所述目标区域地图中与所述目标物体对应的对象,所述目标区域地图为通过所述扫地机器人为所述目标区域建立的区域地图。
  14. 一种区域地图的处理装置,其特征在于,包括:
    接收单元,用于通过目标应用接收扫地机器人发送的目标区域地图,其中,所述目标应用使用与所述扫地机器人绑定的目标帐号登录,所述目标区域地图为通过所述扫地机器人为目标区域建立的区域地图;
    显示单元,用于在所述目标应用的目标显示界面上显示所述目标区域地图,其中,在所述目标区域地图上显示有目标对象以及所述目标对象的目标标注信息,所述目标对象为所述目标区域地图中,与所述目标区域中的目标物体对应的对象,所述目标标注信息用于描述所述目标对象。
  15. 一种计算机可读的存储介质,其特征在于,所述计算机可读的存储介质包括存储的程序,其中,所述程序运行时执行权利要求1至8中任一项所述的方法、或者权利要求9至12中任一项所述的方法。
  16. 一种电子装置,包括存储器和处理器,其特征在于,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行权利要求1至8中任一项所述的方法、或者权利要求9至12中任一项所述的方法。
PCT/CN2022/094615 2021-06-23 2022-05-24 区域地图的处理方法及装置、存储介质及电子装置 WO2022267795A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110701569.0A CN113469000B (zh) 2021-06-23 2021-06-23 区域地图的处理方法及装置、存储介质及电子装置
CN202110701569.0 2021-06-23

Publications (1)

Publication Number Publication Date
WO2022267795A1 true WO2022267795A1 (zh) 2022-12-29

Family

ID=77872542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094615 WO2022267795A1 (zh) 2021-06-23 2022-05-24 区域地图的处理方法及装置、存储介质及电子装置

Country Status (2)

Country Link
CN (1) CN113469000B (zh)
WO (1) WO2022267795A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469000B (zh) * 2021-06-23 2024-06-14 追觅创新科技(苏州)有限公司 区域地图的处理方法及装置、存储介质及电子装置
CN116211168A (zh) * 2021-12-02 2023-06-06 追觅创新科技(苏州)有限公司 清洁设备的运行控制方法及装置、存储介质及电子装置
CN114521841A (zh) * 2022-03-23 2022-05-24 深圳市优必选科技股份有限公司 打扫区域管理方法、***、智能终端、机器人及存储介质
CN116091607B (zh) * 2023-04-07 2023-09-26 科大讯飞股份有限公司 辅助用户寻找物体的方法、装置、设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885459A (zh) * 2018-06-08 2018-11-23 珊口(深圳)智能科技有限公司 导航方法、导航***、移动控制***及移动机器人
CN111242994A (zh) * 2019-12-31 2020-06-05 深圳优地科技有限公司 一种语义地图构建方法、装置、机器人及存储介质
CN111325136A (zh) * 2020-02-17 2020-06-23 北京小马智行科技有限公司 智能车辆中物体对象的标注方法及装置、无人驾驶车辆
US20210049376A1 (en) * 2019-08-14 2021-02-18 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Mobile robot, control method and control system thereof
CN113469000A (zh) * 2021-06-23 2021-10-01 追觅创新科技(苏州)有限公司 区域地图的处理方法及装置、存储介质及电子装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102061511B1 (ko) * 2013-04-26 2020-01-02 삼성전자주식회사 청소 로봇, 홈 모니터링 장치 및 그 제어 방법
CN112867424B (zh) * 2019-03-21 2022-05-06 深圳阿科伯特机器人有限公司 导航、划分清洁区域方法及***、移动及清洁机器人
CN111839360B (zh) * 2020-06-22 2021-09-14 珠海格力电器股份有限公司 扫地机数据处理方法、装置、设备及计算机可读介质
CN112307994A (zh) * 2020-11-04 2021-02-02 深圳市普森斯科技有限公司 基于扫地机的障碍物识别方法、电子装置及存储介质
CN112462780B (zh) * 2020-11-30 2024-05-21 深圳市杉川致行科技有限公司 扫地控制方法、装置、扫地机器人及计算机可读存储介质
CN112783156A (zh) * 2020-12-25 2021-05-11 北京小狗吸尘器集团股份有限公司 扫地机器人及其清扫任务规划方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885459A (zh) * 2018-06-08 2018-11-23 珊口(深圳)智能科技有限公司 导航方法、导航***、移动控制***及移动机器人
US20210049376A1 (en) * 2019-08-14 2021-02-18 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Mobile robot, control method and control system thereof
CN111242994A (zh) * 2019-12-31 2020-06-05 深圳优地科技有限公司 一种语义地图构建方法、装置、机器人及存储介质
CN111325136A (zh) * 2020-02-17 2020-06-23 北京小马智行科技有限公司 智能车辆中物体对象的标注方法及装置、无人驾驶车辆
CN113469000A (zh) * 2021-06-23 2021-10-01 追觅创新科技(苏州)有限公司 区域地图的处理方法及装置、存储介质及电子装置

Also Published As

Publication number Publication date
CN113469000A (zh) 2021-10-01
CN113469000B (zh) 2024-06-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22827292

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22827292

Country of ref document: EP

Kind code of ref document: A1