CN111583336A - Robot and inspection method and device thereof - Google Patents

Robot and inspection method and device thereof

Info

Publication number
CN111583336A
CN111583336A (application CN202010322489.XA)
Authority
CN
China
Prior art keywords
pedestrian
robot
image
area
detected
Prior art date
Legal status
Granted
Application number
CN202010322489.XA
Other languages
Chinese (zh)
Other versions
CN111583336B (en)
Inventor
刘业鹏
庞建新
熊友军
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp
Priority to CN202010322489.XA
Publication of CN111583336A
Application granted
Publication of CN111583336B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A robot inspection method includes: acquiring an image to be detected around the robot through a camera; identifying the position of a pedestrian in the image to be detected; and when the identified pedestrian is located in a pre-calibrated safe area, generating prompt information. This allows the robot, when inspecting in scenes with many pedestrians, to discover dangerous situations in time and generate prompt information, which can effectively reduce the occurrence of dangerous accidents and improve inspection safety.

Description

Robot and inspection method and device thereof
Technical Field
The application belongs to the field of robots, and particularly relates to a robot and an inspection method and device thereof.
Background
Inspection robots are mainly applied in scenes with harsh environments, such as substations, oil pipelines and desert photovoltaic power stations, where they can replace manual work in identifying instrument and component faults. With the increase of campus emergencies in recent years, inspection robots also have application value in the field of campus security. Because the campus environment is mostly flat ground, a wheeled robot is a suitable choice.
A campus inspection robot operates in an environment with high population density, and many young children will curiously observe the robot at close range. The driving speed of a wheeled inspection robot is usually high, and detecting obstacles through a laser radar requires substantial computing resources and is slow, so collisions between the robot and people or other accidents are easily caused, which is detrimental to the safety of robot inspection.
Disclosure of Invention
In view of this, the embodiments of the present application provide a robot and an inspection method and device thereof, to solve the problem in the prior art that a wheeled inspection robot moving at high speed in scenes such as a campus is prone to collisions with people, which reduces inspection safety.
The first aspect of the embodiment of the application provides a method for patrolling a robot, which comprises the following steps:
acquiring an image to be detected around the robot through a camera;
identifying the position of a pedestrian in the image to be detected;
and when the identified pedestrian is located in a pre-calibrated safe area, generating prompt information.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the identifying a position where a pedestrian is located in the image to be detected includes:
acquiring an acquired image, and inputting the acquired image into a trained pedestrian detection network model;
and calculating the area of the pedestrian in the image to be detected according to the trained pedestrian detection network model, and determining the position of the pedestrian according to the area of the pedestrian.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, before the step of acquiring the acquired image and inputting the acquired image into the trained pedestrian detection network model, the method further includes:
acquiring a sample image including a pedestrian and a calibration area where the pedestrian is located in the sample image;
performing convolution on the sample image through a first convolution kernel to obtain a first feature map;
performing second convolution kernel convolution on the first feature map to obtain a second feature map, and performing third convolution kernel convolution on the second feature map to obtain a third feature map;
pooling the first feature map to obtain a fourth feature map;
and fusing the third feature map and the fourth feature map, obtaining an identification area of the pedestrian in the sample image through fourth convolution kernel convolution and full connection, and optimizing parameters of the pedestrian detection network model according to the difference between the calibration area and the identification area until the difference meets the preset requirement.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the first convolution kernel and the third convolution kernel have a size of 3 × 3, and the second convolution kernel and the fourth convolution kernel have a size of 1 × 1.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, before the step of generating the prompt message when the identified pedestrian is located in the pre-calibrated safe area, the method further includes:
acquiring a calibration image comprising a safety line, wherein the distance between the safety line in the calibration image and the robot is a preset safety distance;
and calibrating a corresponding safe area in the image acquired by the camera according to the position of the safety line in the calibration image.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, when the identified pedestrian is located in a pre-calibrated safe area, the step of generating the prompt message includes:
if the pedestrian entering the safe area is detected, generating a pedestrian entering prompt;
and/or if it is detected that the pedestrian has stayed in the safe area for longer than a predetermined duration, generating a warning reminder.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the acquiring, by a camera, an image to be detected around the robot includes:
collecting a plurality of groups of video streams through a camera group, wherein the camera group comprises cameras arranged at the front part, the rear part, the left part and the right part of the robot;
and analyzing to obtain the image to be detected according to the plurality of groups of collected video streams.
A second aspect of the embodiments of the present application provides an inspection device of a robot, the inspection device of the robot includes:
the to-be-detected image acquisition unit is used for acquiring to-be-detected images around the robot through the camera;
the area identification unit is used for identifying the position of the pedestrian in the image to be detected;
and the prompting unit is used for generating prompting information when the identified pedestrian is positioned in a pre-calibrated safe area.
A third aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the inspection method of the robot according to any one of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the inspection method for a robot according to any one of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: by collecting images to be detected around the robot, identifying the position of the pedestrian in the image to be detected, and generating prompt information when the identified pedestrian enters the pre-calibrated safe area, the robot can inspect in scenes with many pedestrians while discovering dangerous situations and generating prompt information in time. Moreover, vision-based computation is faster and uses fewer computing resources, enabling quicker detection and warning, which can effectively reduce the occurrence of dangerous accidents and improve inspection safety.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an implementation of an inspection method of a robot according to an embodiment of the present disclosure;
fig. 2 is a schematic view illustrating an installation of a robot camera according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of an implementation of a training method for a pedestrian detection network model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a pedestrian detection network model provided by an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating an implementation of a method for calibrating a safety line according to an embodiment of the present application;
fig. 6 is a schematic diagram of a calibration frame according to an embodiment of the present application;
fig. 7 is a schematic diagram of an inspection device of a robot according to an embodiment of the present disclosure;
fig. 8 is a schematic view of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic view of an implementation flow of an inspection method of a robot according to an embodiment of the present application, which is detailed as follows:
in step S101, an image to be measured around the robot is captured by a camera.
Specifically, the robot in the embodiment of the present application may be a wheeled robot, a biped robot, or the like.
When collecting the image to be detected, the collection may be performed through a camera arranged in the robot's direction of travel, or through cameras arranged around the robot, for example cameras arranged on its four sides (front, rear, left and right), so that 360-degree monitoring of the surroundings can be carried out. For example, in the schematic diagram of the positional relationship between the robot and the cameras shown in fig. 2, the robot body includes four side surfaces, namely a front side, a rear side, a left side and a right side, with adjacent side surfaces perpendicular to each other. The viewing angle of each camera may be greater than or equal to 90 degrees, so that images around the robot are collected through the four cameras mounted on the robot body and blind spots around the robot are reduced.
In one implementation, the robot is a wheeled robot that can move quickly forward and backward. Among the cameras arranged around the robot, the frequency at which the cameras facing the front and rear collect images may therefore be greater than that of the cameras arranged on the left and right sides, so that the view in the robot's direction of motion is captured more promptly and the efficiency of safety early warning is improved.
In one implementation, in order to further reduce blind areas in the images collected by the robot, multiple cameras may be arranged at the same central position, for example at the robot's head, and the tilt angle of each camera may be adjusted according to the robot's height, so that the collected images cover all areas around the robot.
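As an illustration of this collection step, the Python sketch below polls four OpenCV cameras, sampling the front and rear streams more often than the left and right ones. The camera indices, the 1:3 sampling ratio and the generator interface are assumptions made for illustration; the patent requires only that images around the robot be collected, with the front and rear views optionally sampled more frequently.

```python
import cv2

# Hypothetical device indices for the cameras on the four sides of the robot.
CAMERA_INDICES = {"front": 0, "rear": 1, "left": 2, "right": 3}
# Front/rear views are sampled every tick, left/right every third tick
# (an assumed ratio; the patent only says front/rear may be sampled more often).
SAMPLE_INTERVAL = {"front": 1, "rear": 1, "left": 3, "right": 3}

def images_to_detect():
    """Yield (side, frame) pairs parsed from the four collected video streams."""
    caps = {side: cv2.VideoCapture(i) for side, i in CAMERA_INDICES.items()}
    tick = 0
    try:
        while True:
            for side, cap in caps.items():
                if tick % SAMPLE_INTERVAL[side] == 0:
                    ok, frame = cap.read()
                    if ok:
                        yield side, frame   # one image to be detected
            tick += 1
    finally:
        for cap in caps.values():
            cap.release()
```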
In step S102, the position of the pedestrian included in the image to be measured is identified.
When identifying the position of a pedestrian in the image to be detected, matching can be performed with a preset pedestrian feature image to judge whether the image to be detected includes a pedestrian, after which the area where the pedestrian is located is further determined.
Alternatively, a trained pedestrian detection network model can be obtained by training the pedestrian detection network model on sample images in which the area where the pedestrian is located has been calibrated. The area of the pedestrian in the image to be detected is then calculated by the trained pedestrian detection network model, and the position of the pedestrian is determined according to that area.
The position of the pedestrian can be represented by the coordinates of the two vertices of the box corresponding to the area where the pedestrian is located, namely the upper-left corner (X1, Y1) and the lower-right corner (X2, Y2) of the pedestrian frame; this pair of vertices identifies where the pedestrian is in the image.
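As a small illustration, this box representation can be captured in a plain structure; the `foot_point` helper is a hypothetical addition, not something the patent specifies:

```python
from dataclasses import dataclass

@dataclass
class PedestrianBox:
    """Area where a pedestrian is located: upper-left corner (x1, y1)
    and lower-right corner (x2, y2) in image coordinates."""
    x1: float
    y1: float
    x2: float
    y2: float

    def foot_point(self):
        # Hypothetical helper: the midpoint of the bottom edge is a common
        # proxy for where the pedestrian stands on the ground plane.
        return ((self.x1 + self.x2) / 2.0, self.y2)
```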
In determining the pedestrian detection network model for detecting the area where the pedestrian is located, as shown in fig. 3, the method may include:
in step S301, a sample image including a pedestrian and a calibration area where the pedestrian is located in the sample image are acquired.
When training the pedestrian detection network model, sample images from the robot's task execution scene need to be determined first. For example, when the robot is used in a campus scene, images captured in the campus may be used as training sample images. In addition, to improve the effectiveness of the pedestrian detection network model, the sample images may be acquired at different positions of the scene to be inspected, with different crowds, in different weather and at different times. The area where a pedestrian is located in a sample image may be determined by manual calibration; to simplify the description, the calibrated area where the pedestrian is located is called the calibration area.
In a possible implementation, before the acquired sample image is calibrated, it may further be preprocessed, including color space transformation and/or scale space transformation of the acquired sample image. For example, the acquired sample image is converted into a target color system and compressed into an image of a predetermined size, e.g. 224 × 224, so that the pedestrian detection network model is trained on images of the same size, which simplifies training.
Of course, after training is completed, the image to be detected can likewise be compressed into an image of the preset size when it is identified, so that the area where the pedestrian is located in the image to be detected can be detected and identified.
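A minimal sketch of this preprocessing, assuming OpenCV, BGR input frames and RGB as the target color system (the patent does not name a specific color space):

```python
import cv2

def preprocess(image_bgr, size=224):
    """Color-space transform plus scale-space transform: convert the frame
    to the target color system and compress it to a size x size image."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    return cv2.resize(rgb, (size, size), interpolation=cv2.INTER_AREA)
```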
In step S302, convolving the sample image with a first convolution kernel to obtain a first feature map;
after the training sample image is obtained, feature extraction can be performed on the training sample through a first convolution kernel in advance to obtain a first feature map corresponding to the first convolution kernel. When the size of the sample image or the preprocessed sample image is 224 × 224, the size of the first convolution kernel may be 3 × 3, and the first feature map is extracted by convolution of the first convolution kernel, so that the first feature map may be divided into two branches for processing.
In step S303, the first feature map is subjected to a second convolution kernel convolution to obtain a second feature map, and the second feature map is subjected to a third convolution kernel convolution to obtain a third feature map.
As shown in the schematic diagram of the pedestrian detection network model structure in fig. 4, the model may include two processing branches. The first branch processes the first feature map obtained by convolution with the first convolution kernel as follows: the first feature map is convolved by the second convolution kernel to obtain the second feature map, which is further convolved by the third convolution kernel to obtain the third feature map. The second convolution kernel size may be 1 × 1 and the third convolution kernel size may be 3 × 3.
In step S304, the first feature map is pooled to obtain a fourth feature map.
The second processing branch pools the first feature map. In the pooling treatment, the region of each pooling operation may be 2 × 2; after pooling, the fourth feature map is obtained.
In step S305, the third feature map and the fourth feature map are fused, subjected to a fourth convolution kernel convolution and full connection to obtain an identification region of the pedestrian in the sample image, and parameters of the pedestrian detection network model are optimized according to a difference between the calibration region and the identification region until the difference meets a preset requirement.
The feature maps obtained by the two branches, namely the third feature map obtained by the two convolutions of the first branch and the fourth feature map obtained by pooling, are fused, and the identification area of the pedestrian in the sample image is further obtained through convolution with the fourth convolution kernel, full-connection processing and the like.
The identification area where the pedestrian is located, as calculated by the pedestrian detection network model, is compared with the pre-calibrated calibration area to determine the difference between the two. If the difference does not meet the preset requirement, the parameters of the pedestrian detection network model, including the parameters of the first, second, third and fourth convolution kernels, can be further adjusted according to the difference, until the difference between the identification area output by the pedestrian detection network model and the preset calibration area meets the preset requirement, thereby obtaining the trained pedestrian detection network model.
By fusing the feature maps obtained by the two branches in this way, training of the pedestrian detection model and identification of the pedestrian area in the image to be detected can be completed more efficiently.
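To make the two-branch structure concrete, here is a minimal PyTorch sketch. The patent fixes only the kernel sizes (3 × 3 for the first and third kernels, 1 × 1 for the second and fourth) and the 2 × 2 pooling; the channel counts, the stride of 2 on the third convolution (so that both branches reach the same 112 × 112 spatial size), fusion by channel concatenation and a fully connected head regressing the four box coordinates are all assumptions made for illustration, not the definitive architecture.

```python
import torch
import torch.nn as nn

class PedestrianDetector(nn.Module):
    """Two-branch sketch of the network in fig. 4 (channel counts assumed)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)    # first kernel, 3x3
        self.conv2 = nn.Conv2d(16, 16, kernel_size=1)              # second kernel, 1x1
        self.conv3 = nn.Conv2d(16, 32, kernel_size=3, stride=2,
                               padding=1)                           # third kernel, 3x3
        self.pool = nn.MaxPool2d(kernel_size=2)                    # 2x2 pooling branch
        self.conv4 = nn.Conv2d(32 + 16, 32, kernel_size=1)         # fourth kernel, 1x1
        self.fc = nn.Linear(32 * 112 * 112, 4)                     # box (x1, y1, x2, y2)

    def forward(self, x):                       # x: (N, 3, 224, 224)
        f1 = torch.relu(self.conv1(x))          # first feature map, 224x224
        f2 = torch.relu(self.conv2(f1))         # second feature map, 224x224
        f3 = torch.relu(self.conv3(f2))         # third feature map, 112x112
        f4 = self.pool(f1)                      # fourth feature map, 112x112
        fused = torch.cat([f3, f4], dim=1)      # fuse the two branches
        out = torch.relu(self.conv4(fused))
        return self.fc(out.flatten(1))          # identification area of the pedestrian
```

Training would then compare the predicted identification area against the manually calibrated area and adjust all four kernels by backpropagation, for example with `nn.SmoothL1Loss()(model(images), calibrated_boxes)`, until the difference meets the preset requirement.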
In step S103, when the identified pedestrian is located in a safety region calibrated in advance, prompt information is generated.
The safe area can be determined by the size of the safety distance: the larger the safety distance, the larger the range of the safe area.
When calibrating the safety area of the robot, the method may include, as shown in fig. 5:
in step S501, a calibration image including a safety line is acquired, and a distance between the safety line and the robot in the calibration image is a preset safety distance.
When calibrating the safe area of the robot, the robot can be placed at a calibration position in advance, and a safety line can be drawn around the calibration position according to the preset safety distance, so that the boundary can be identified from the drawn line.
For example, when the safety distance of the robot is 1 meter, a circle with a radius of 1 meter may be drawn centered on the robot's calibration position. The robot then captures an image at the calibration position to obtain a calibration image including the safety line. Of course, the safety distance in the direction of travel may be set greater than the safety distance in other directions; for example, 2 meters in the robot's direction of travel and 1 meter in other directions.
In step S502, calibrating a corresponding safety region in the image acquired by the camera according to the position of the safety line in the calibration image.
Since the camera is fixed on the wheeled robot, the area within the safety line can be determined as the safe area according to the safety line included in the calibration image. The pedestrian area in the image to be detected can then be compared directly against the determined safety line. As shown in fig. 6, if a pedestrian enters the preset safe area, that is, touches the safety line of the safe area, the distance between the pedestrian and the robot is short, and prompt information may be issued, for example a prompt reminding the pedestrian to pay attention to safety. The prompt information includes, but is not limited to, an audible prompt, an indicator-light prompt, and the like. In addition, after a pedestrian has been in the preset safe area for longer than a preset time, for example longer than 10 seconds, an alarm prompt such as a whistle alarm can be issued, and the face information of that pedestrian can be captured and stored. Furthermore, when a pedestrian is detected entering the safe area, the image collected by the robot can be transmitted to the monitoring center, with the area where the pedestrian is located and the boundary line of the safe area marked, for example with red lines, so that monitoring personnel can find problems in time.
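The check itself can be sketched as follows, under simplifying assumptions: the calibrated safe area is represented as a polygon in image coordinates derived from the safety line, the pedestrian counts as inside when the bottom-center of the detection box falls within the polygon, and the prompt and alarm actions are placeholders. The polygon vertices, the single-pedestrian state and the 10-second threshold taken from the example above are illustrative.

```python
import time
import cv2
import numpy as np

# Hypothetical safe-area polygon (image coordinates), derived offline from the
# calibration image containing the safety line.
SAFE_AREA = np.array([[100, 400], [540, 400], [600, 480], [40, 480]], np.float32)
ALARM_AFTER_S = 10.0     # duration threshold from the example in the description

_entered_at = None       # entry time of the pedestrian currently in the area

def check_pedestrian(box):
    """box = (x1, y1, x2, y2); return a prompt action or None."""
    global _entered_at
    foot = ((box[0] + box[2]) / 2.0, float(box[3]))   # bottom-center of the box
    inside = cv2.pointPolygonTest(SAFE_AREA.reshape(-1, 1, 2), foot, False) >= 0
    if not inside:
        _entered_at = None
        return None
    now = time.monotonic()
    if _entered_at is None:
        _entered_at = now
        return "pedestrian-entry prompt"   # e.g. voice or indicator-light prompt
    if now - _entered_at > ALARM_AFTER_S:
        return "warning alarm"             # e.g. whistle; also save face image
    return None
```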
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 is a schematic diagram of an inspection device of a robot according to an embodiment of the present application, which is detailed as follows:
the inspection device of robot includes:
the to-be-detected image acquisition unit 701 is used for acquiring to-be-detected images around the robot through the camera;
an area identification unit 702, configured to identify a position of a pedestrian included in the image to be detected;
the prompting unit 703 is configured to generate prompting information when the identified pedestrian is located in a safety area calibrated in advance.
The inspection device of the robot corresponds to the inspection method of the robot described above.
Fig. 8 is a schematic view of a robot provided in an embodiment of the present application. As shown in fig. 8, the robot 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82, such as a robot inspection program, stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the embodiments of the inspection method of the robot described above. Alternatively, the processor 80 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 82.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the robot 8. For example, the computer program 82 may be divided into:
the to-be-detected image acquisition unit is used for acquiring to-be-detected images around the robot through the camera;
the area identification unit is used for identifying the position of the pedestrian in the image to be detected;
and the prompting unit is used for generating prompting information when the identified pedestrian is positioned in a pre-calibrated safe area.
The robot may include, but is not limited to, a processor 80, a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of a robot 8 and does not constitute a limitation of robot 8 and may include more or fewer components than shown, or some components in combination, or different components, e.g., the robot may also include input output devices, network access devices, buses, etc.
The Processor 80 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the robot 8, such as a hard disk or a memory of the robot 8. The memory 81 may also be an external storage device of the robot 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the robot 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the robot 8. The memory 81 is used for storing the computer program and other programs and data required by the robot. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A robot inspection method is characterized by comprising the following steps:
acquiring an image to be detected around the robot through a camera;
identifying the position of a pedestrian in the image to be detected;
and when the identified pedestrian is located in a pre-calibrated safe area, generating prompt information.
2. The inspection method according to claim 1, wherein the step of identifying the position of the pedestrian included in the image to be detected includes:
acquiring an acquired image, and inputting the acquired image into a trained pedestrian detection network model;
and calculating the area of the pedestrian in the image to be detected according to the trained pedestrian detection network model, and determining the position of the pedestrian according to the area of the pedestrian.
3. The inspection method for a robot according to claim 2, wherein prior to the steps of acquiring the captured images, inputting the captured images into a trained pedestrian detection network model, the method further comprises:
acquiring a sample image including a pedestrian and a calibration area where the pedestrian is located in the sample image;
performing convolution on the sample image through a first convolution kernel to obtain a first feature map;
performing second convolution kernel convolution on the first feature map to obtain a second feature map, and performing third convolution kernel convolution on the second feature map to obtain a third feature map;
pooling the first feature map to obtain a fourth feature map;
and fusing the third feature map and the fourth feature map, obtaining an identification area of the pedestrian in the sample image through fourth convolution kernel convolution and full connection, and optimizing parameters of the pedestrian detection network model according to the difference between the calibration area and the identification area until the difference meets the preset requirement.
4. The inspection method of the robot according to claim 3, wherein the first convolution kernel and the third convolution kernel have a size of 3 x 3, and the second convolution kernel and the fourth convolution kernel have a size of 1 x 1.
5. The inspection method for a robot according to claim 1, wherein prior to the step of generating a prompt when the identified pedestrian is located in a pre-calibrated safe area, the method further comprises:
acquiring a calibration image comprising a safety line, wherein the distance between the safety line in the calibration image and the robot is a preset safety distance;
and calibrating a corresponding safe area in the image acquired by the camera according to the position of the safety line in the calibration image.
6. The inspection method according to claim 1, wherein the step of generating the prompt message when the identified pedestrian is located in a pre-calibrated safe area includes:
if a pedestrian entering the safe area is detected, generating a pedestrian entering prompt;
and/or if it is detected that the pedestrian has stayed in the safe area for longer than a predetermined duration, generating a warning reminder.
7. The inspection method according to claim 1, wherein the step of collecting the image to be detected around the robot through the camera includes:
collecting a plurality of groups of video streams through a camera group, wherein the camera group comprises cameras arranged at the front part, the rear part, the left part and the right part of the robot;
and analyzing to obtain the image to be detected according to the plurality of groups of collected video streams.
8. An inspection device for a robot, characterized in that the inspection device of the robot includes:
the to-be-detected image acquisition unit is used for acquiring to-be-detected images around the robot through the camera;
the area identification unit is used for identifying the position of the pedestrian in the image to be detected;
and the prompting unit is used for generating prompting information when the identified pedestrian is positioned in a pre-calibrated safe area.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, carries out the steps of the inspection method of the robot according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the inspection method of a robot according to any one of claims 1 to 7.
CN202010322489.XA 2020-04-22 2020-04-22 Robot and inspection method and device thereof Active CN111583336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010322489.XA CN111583336B (en) 2020-04-22 2020-04-22 Robot and inspection method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010322489.XA CN111583336B (en) 2020-04-22 2020-04-22 Robot and inspection method and device thereof

Publications (2)

Publication Number Publication Date
CN111583336A (en) 2020-08-25
CN111583336B CN111583336B (en) 2023-12-01

Family

ID=72112519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010322489.XA Active CN111583336B (en) 2020-04-22 2020-04-22 Robot and inspection method and device thereof

Country Status (1)

Country Link
CN (1) CN111583336B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107636679A (en) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method and device
WO2019083291A1 (en) * 2017-10-25 2019-05-02 엘지전자 주식회사 Artificial intelligence moving robot which learns obstacles, and control method therefor
CN108780319A (en) * 2018-06-08 2018-11-09 Software updating method, system, mobile robot and server
US20200033874A1 (en) * 2018-07-26 2020-01-30 Toyota Research Institute, Inc. Systems and methods for remote visual inspection of a closed space
CN109176513A (en) * 2018-09-04 2019-01-11 北京华开领航科技有限责任公司 A kind of method for inspecting and cruising inspection system of intelligent inspection robot
CN109571468A (en) * 2018-11-27 2019-04-05 深圳市优必选科技有限公司 Security protection crusing robot and security protection method for inspecting
CN109664301A (en) * 2019-01-17 2019-04-23 中国石油大学(北京) Method for inspecting, device, equipment and computer readable storage medium
CN110228413A (en) * 2019-06-10 2019-09-13 吉林大学 Oversize vehicle avoids pedestrian from being involved in the safety pre-warning system under vehicle when turning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821987A (en) * 2021-01-18 2022-07-29 漳州立达信光电子科技有限公司 Reminding method and device and terminal equipment
CN114821987B (en) * 2021-01-18 2024-04-30 漳州立达信光电子科技有限公司 Reminding method and device and terminal equipment

Also Published As

Publication number Publication date
CN111583336B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN110660186B (en) Method and device for identifying target object in video image based on radar signal
JPH07250319A (en) Supervisory equipment around vehicle
CN115171361B (en) Dangerous behavior intelligent detection and early warning method based on computer vision
CN113744348A (en) Parameter calibration method and device and radar vision fusion detection equipment
CN113536935A (en) Safety monitoring method and equipment for engineering site
CN113408454A (en) Traffic target detection method and device, electronic equipment and detection system
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN116863297A (en) Monitoring method, device, system, equipment and medium based on electronic fence
CN115995037A (en) Signal lamp state detection method, device, equipment, medium and product
CN117372979A (en) Road inspection method, device, electronic equipment and storage medium
CN111583336B (en) Robot and inspection method and device thereof
CN114821497A (en) Method, device and equipment for determining position of target object and storage medium
CN113945219A (en) Dynamic map generation method, system, readable storage medium and terminal equipment
CN114067287A (en) Foreign matter identification and early warning system based on vehicle side road side data perception fusion
CN116912328A (en) Calibration method and device of inverse perspective transformation matrix
CN114724119B (en) Lane line extraction method, lane line detection device, and storage medium
CN116486351A (en) Driving early warning method, device, equipment and storage medium
CN117522766A (en) Obstacle presenting method, apparatus, device, readable storage medium, and program product
CN115497242A (en) Intelligent monitoring system and monitoring method for foreign matter invasion in railway business line construction
CN112364693B (en) Binocular vision-based obstacle recognition method, device, equipment and storage medium
CN114092857A (en) Gateway-based collection card image acquisition method, system, equipment and storage medium
CN114742726A (en) Blind area detection method and device, electronic equipment and storage medium
CN113869407A (en) Monocular vision-based vehicle length measuring method and device
CN111539279A (en) Road height limit height detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant