CN110955243B - Travel control method, apparatus, device, readable storage medium, and mobile apparatus - Google Patents


Info

Publication number
CN110955243B
CN110955243B (application number CN201911192556.4A)
Authority
CN
China
Prior art keywords
image
target object
gesture
preset
region
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN201911192556.4A
Other languages
Chinese (zh)
Other versions
CN110955243A (application publication)
Inventor
刘忠刚
Current Assignee
Neolix Technologies Co Ltd
Original Assignee
Neolix Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Neolix Technologies Co Ltd filed Critical Neolix Technologies Co Ltd
Priority to CN201911192556.4A priority Critical patent/CN110955243B/en
Publication of CN110955243A publication Critical patent/CN110955243A/en
Application granted granted Critical
Publication of CN110955243B publication Critical patent/CN110955243B/en


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

In the travel control method, apparatus, device, readable storage medium, and mobile device disclosed herein, while the mobile device travels, images are captured by an image capturing apparatus and analyzed to detect whether a target object is present and whether the gesture of the target object is a preset gesture. If a target object is present in the captured image and its gesture is the preset gesture, the power equipment controls the mobile device to stop traveling. By analyzing the captured images and automatically stopping the mobile device once a target object with the preset gesture is identified, the scheme overcomes the limitation of existing mobile devices, which can stop only at fixed sites and therefore struggle to satisfy a user's immediate demand.

Description

Travel control method, apparatus, device, readable storage medium, and mobile apparatus
Technical Field
The present application relates to the field of automatic control technology, and in particular, to a travel control method, apparatus, device, readable storage medium, and mobile apparatus.
Background
When an unmanned retail vehicle cruises and vends in scenes such as parks, communities, or scenic areas, the whole process is unmanned, and the vehicle parks and sells only at preset sites. In this existing cruise-vending mode, the retail vehicle can stop only at fixed, predetermined locations, yet customers may also want to purchase goods while the vehicle is en route. The existing cruise mode of the retail vehicle therefore struggles to satisfy users' demands well.
Disclosure of Invention
The objects of the present application include, for example, providing a travel control method, apparatus, device, readable storage medium, and mobile device that can satisfy users' immediate demands by means of image recognition.
Embodiments of the application may be implemented as follows:
In a first aspect, an embodiment provides a travel control method applied to a control apparatus in a mobile device, the mobile device further including an image capturing apparatus and power equipment each connected to the control apparatus, the method including:
acquiring an image captured by the image capturing apparatus while the mobile device travels;
detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture; and
if the target object exists in the image and its gesture is the preset gesture, sending an indication message to the power equipment, the indication message instructing the power equipment to control the mobile device to stop traveling.
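The three steps of the first aspect can be sketched as a simple decision loop. The names below (`Detection`, `control_step`, the `"STOP_TRAVELING"` message) are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    has_target: bool       # a target object (human body) was found in the image
    gesture_matches: bool  # its gesture equals the preset stop gesture

def should_stop(det: Detection) -> bool:
    """Stop only when both conditions of the first aspect hold."""
    return det.has_target and det.gesture_matches

def control_step(image, detect, send_indication) -> bool:
    """One iteration: analyze the current image; if a target object with the
    preset gesture is present, send the stop indication to the power equipment."""
    det = detect(image)
    if should_stop(det):
        send_indication("STOP_TRAVELING")
        return True
    return False
```

In use, `detect` would wrap the recognition model and gesture classifier described below, and `send_indication` the channel to the power equipment.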
In an optional embodiment, the step of detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture includes:
determining a region of interest image in the image according to a predetermined rule;
importing the region of interest image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in the region of interest image according to the obtained recognition result;
if the target object exists in the region-of-interest image, judging whether the gesture of the target object is a preset gesture according to a preset judging rule.
In an alternative embodiment, the image comprises multiple successive frames, and the step of determining the region of interest image according to a predetermined rule comprises:
for the last frame among the multiple frames, obtaining the preliminary objects contained in that frame, a preliminary object being an object present in every one of the multiple frames;
obtaining the preliminary objects located within a set area of the last frame; and
determining the region of interest image according to the regions occupied by the preliminary objects within the set area.
In an optional embodiment, the step of importing the region of interest image into a pre-established and trained recognition model to perform recognition, and determining whether the target object exists in the region of interest image according to the obtained recognition result includes:
dividing the region of interest image into a plurality of strip-shaped region images in the vertical direction;
and respectively importing each strip-shaped area image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in each strip-shaped area image according to the obtained recognition result.
In an optional embodiment, the step of detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture further includes:
calculating the real size of the target object from its size in the image using a preset conversion algorithm; and
detecting whether the real size is within a preset range and, if so, executing the step of judging whether the gesture of the target object is the preset gesture according to the preset discrimination rule.
In an optional embodiment, the step of determining whether the gesture of the target object is a preset gesture according to a preset determination rule includes:
the target object is imported into a pre-established classifier for classification and discrimination, wherein the classifier is obtained by training based on a plurality of positive sample images and a plurality of negative sample images in advance, the positive sample images comprise training objects with the preset postures, and the negative sample images comprise training objects with postures other than the preset postures;
and judging whether the gesture of the target object is a preset gesture according to the classification judging result of the classifier.
In an optional embodiment, the target object is a human body image, and the step of judging whether the gesture of the target object is a preset gesture according to a preset discrimination rule includes:
obtaining skeletal key points of the hand region of the human body image;
obtaining the relative angle of the skeletal key points of the hand region with respect to the torso of the human body; and
detecting whether the relative angle is within a preset angle range and, if so, judging that the gesture of the target object is the preset gesture.
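The relative-angle test above can be sketched in 2D with shoulder and wrist key points. The patent does not specify which key points or which angle range are used; the choice of shoulder/wrist and the 70–110 degree band here are assumptions:

```python
import math

def relative_angle(shoulder, wrist):
    """Angle (degrees) of the shoulder->wrist vector measured from the downward
    torso direction: 0 = arm hanging down, 90 = arm raised level, 180 = straight up.
    Points are (x, y) in image coordinates, where y grows downward."""
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    return math.degrees(math.atan2(abs(dx), dy))

def is_preset_gesture(shoulder, wrist, lo=70.0, hi=110.0):
    """Judge the gesture as the preset one when the arm is raised roughly
    level (the [lo, hi] angle range is a hedged assumption)."""
    return lo <= relative_angle(shoulder, wrist) <= hi
```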
In an alternative embodiment, the method further comprises:
according to the depth information of the target object in the image, calculating to obtain the distance between the target object and the mobile device;
calculating to obtain a travel time length according to the distance and the travel speed of the mobile device;
and timing according to the travelling time length, and executing the step of sending the indication message to the power equipment after the timing is finished.
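The distance/timing embodiment above reduces to a small computation: travel time equals the depth-derived distance divided by the travel speed, after which the indication message is sent. A minimal sketch, with an injectable `sleep` so the timer can be tested; the function names are assumptions:

```python
import time

def travel_time(distance_m: float, speed_mps: float) -> float:
    """Travel time length computed from distance and travel speed."""
    if speed_mps <= 0:
        raise ValueError("speed must be positive")
    return distance_m / speed_mps

def stop_after_reaching(distance_m, speed_mps, send_indication, sleep=None):
    """Wait out the computed travel time, then send the stop indication,
    so the vehicle halts beside the target rather than immediately."""
    t = travel_time(distance_m, speed_mps)
    (sleep or time.sleep)(t)
    send_indication("STOP_TRAVELING")
    return t
```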
In a second aspect, an embodiment provides a travel control device applied to a control apparatus in a mobile device, the mobile device further including an image capturing apparatus and power equipment each connected to the control apparatus, the travel control device including:
an image acquisition module, configured to acquire the image captured by the image capturing apparatus while the mobile device travels;
a detection module, configured to detect whether a target object exists in the image and whether the gesture of the target object is a preset gesture; and
a sending module, configured to send an indication message to the power equipment when the target object exists in the image and its gesture is the preset gesture, the indication message instructing the power equipment to control the mobile device to stop traveling.
In a third aspect, an embodiment provides a control apparatus comprising one or more storage media and one or more processors in communication with the storage media, the storage media storing machine-executable instructions that, when executed by the one or more processors, cause the control apparatus to perform the travel control method of any of the preceding embodiments.
In a fourth aspect, embodiments provide a machine-readable storage medium storing machine-executable instructions that when executed implement the travel control method of any of the preceding embodiments.
In a fifth aspect, an embodiment provides a mobile device including an image capturing apparatus, power equipment, and the control apparatus according to the foregoing embodiment, the control apparatus being connected to the image capturing apparatus and the power equipment.
The beneficial effects of the embodiment of the application include, for example:
In the travel control method, apparatus, device, readable storage medium, and mobile device of the embodiments, while the mobile device travels, images are captured by the image capturing apparatus and analyzed to detect whether a target object is present and whether the gesture of the target object is a preset gesture. If a target object is present in the captured image and its gesture is the preset gesture, the power equipment controls the mobile device to stop traveling. By analyzing the captured images and automatically stopping the mobile device once a target object with the preset gesture is identified, the scheme overcomes the limitation of existing mobile devices, which can stop only at fixed sites and therefore struggle to satisfy a user's immediate demand.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a mobile device according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a control device according to an embodiment of the present application;
FIG. 3 is a flow chart of a travel control method according to an embodiment of the present application;
FIG. 4 is a flowchart showing a sub-step of step S320 in FIG. 3;
FIG. 5 is a flowchart of a gesture determination method according to an embodiment of the present application;
FIG. 6 is another flowchart of a gesture determination method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of skeletal key points of a human body image provided by an embodiment of the present application;
FIG. 8 is another flow chart of a travel control method according to an embodiment of the present application;
fig. 9 is a functional block diagram of a travel control device according to an embodiment of the present application.
Reference numerals: 10 - control apparatus; 110 - processor; 120 - memory; 130 - communication unit; 140 - travel control device; 141 - image acquisition module; 142 - detection module; 143 - transmitting module; 20 - image capturing apparatus; 30 - power equipment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
Referring to fig. 1, an embodiment of the present application provides a mobile device, which includes a power apparatus 30, an image capturing apparatus 20, and a control apparatus 10, wherein the control apparatus 10 may be connected to the power apparatus 30 and the image capturing apparatus 20, for example, electrically connected or wirelessly connected, so as to implement information and data interaction with the power apparatus 30 and the image capturing apparatus 20.
In this embodiment, the mobile device may be a movable apparatus such as an unmanned vehicle or a robot; the unmanned vehicle may be, for example, an unmanned vending vehicle, an unmanned bus, or the like.
The image capturing apparatus 20 may be provided at the front of the vehicle body to collect image information of the area ahead of the mobile device. The control apparatus 10 may be installed inside the mobile device; it acquires the images captured by the image capturing apparatus 20, analyzes them, and issues instructions to the power equipment 30 according to the analysis result. The power equipment 30 controls the travel state of the mobile device according to the received instructions.
In addition, in this embodiment, when the mobile device is an unmanned vending vehicle, it may further include a storage rack, a delivery frame, a touch screen, and other components disposed on the vehicle body, so as to store goods and complete the vending process when a user makes a purchase.
Referring to fig. 2, a block diagram of the control apparatus 10 shown in fig. 1 according to an embodiment of the present application is provided, where the control apparatus 10 includes a travel control device 140, a processor 110, a memory 120, and a communication unit 130. The memory 120, the processor 110, and the communication unit 130 are directly or indirectly electrically connected to each other to realize data transmission or interaction. The travel control means 140 comprises at least one software functional module which may be stored in the memory 120 in the form of software or firmware or which is solidified in the operating system of the control device 10. The processor 110 is configured to execute executable modules stored in the memory 120, such as software functional modules or computer programs included in the travel control device 140, to implement a travel control method.
Referring to fig. 3, a flowchart of a travel control method applied to the control device 10 according to an embodiment of the present application is shown. It should be noted that, the travel control method provided by the present application is not limited by the specific sequence shown in fig. 3 and described below. The steps shown in fig. 3 will be described in detail below.
Step S310, during the traveling of the mobile device, acquiring an image acquired by the image capturing apparatus 20.
Step S320, detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture; when the target object exists in the image and its gesture is the preset gesture, executing the following step S330, otherwise returning to step S310.
Step S330, sending an indication message to the power equipment 30, where the indication message is used to instruct the power equipment 30 to control the mobile device to stop traveling.
In this embodiment, the mobile device travels along a predetermined route. An unmanned vending vehicle, for example, may travel along a preset route through a scene such as a park and, upon reaching each preset point of sale, park there and perform vending operations.
To allow a user to bring the mobile device to a stop at any time and place, so that items can be purchased even at locations that are not preset points of sale, the image capturing apparatus 20 provided on the mobile device captures images while the mobile device travels; the captured images cover the forward area in the traveling direction. The mobile device may further be provided with a voice playing device that broadcasts voice information during cruise vending, for example announcing that the unmanned vending vehicle is cruising and that users may hail it to stop and purchase goods, so as to inform surrounding users that the vehicle can be hailed.
The control apparatus 10 may acquire the image acquired by the image capturing apparatus 20, and the image capturing apparatus 20 may send the acquired image to the control apparatus 10 in real time, or may send the acquired image to the control apparatus 10 every predetermined time period, for example, one minute or two minutes, etc., which is not particularly limited in this embodiment.
After obtaining the image transmitted by the image capturing apparatus 20, the control apparatus 10 analyzes the obtained image, and can detect whether or not a target object, which may be a human body image, is present in the image, that is, whether or not there is a human body image in the acquired image. If a human body image exists, whether the gesture of the human body image is a preset gesture or not is detected, wherein the preset gesture is a preset gesture for triggering the mobile device to stop. I.e., when it is detected that the human body is present in front of the mobile device in the preset posture, the control apparatus 10 transmits an indication message to the power apparatus 30 to control the mobile device to stop traveling through the power apparatus 30.
Thus, when a user wishes to stop the mobile device while it travels, for example to purchase goods, the user can strike the preset gesture. When the control apparatus 10 analyzes the image and recognizes a user whose gesture is the preset gesture, it triggers the power equipment 30 to control the mobile device to stop. The user's immediate demand for the mobile device can therefore be satisfied through automatic recognition and control.
In this embodiment, in order to accurately identify whether a target object exists in an image and identify whether the gesture of the target object is a preset gesture, referring to fig. 4, this step may be implemented by:
step S321, determining an image of a region of interest in the image according to a predetermined rule.
Step S322, importing the region-of-interest image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in the region-of-interest image according to the obtained recognition result.
Step S323, determining whether the gesture of the target object is a preset gesture according to a preset determination rule when the target object exists in the region of interest image.
In this embodiment, to eliminate interference factors in the captured image and reduce the image-processing workload, a region of interest image may be cropped from the image, and image recognition may be performed on that region. The region of interest image is a cropped portion of the image, obtained after analyzing the full image, that contains the target object.
The images used for object recognition may be multiple successive frames, which are analyzed to determine the region of interest. In this embodiment, considering that the field of view of the single camera is about 128 degrees and images captured over such a wide angle are generally distorted, de-distortion processing may be applied to each captured frame in advance to remove the distortion at the frame edges.
To reduce the image-processing load, each frame may first be downsampled. Then, for the last frame among the multiple frames, the preliminary objects contained in it are obtained, a preliminary object being an object present in every one of the multiple frames. Using only objects that appear in every frame avoids interference from objects that appear only briefly and improves the stability of the recognition result.
The objects may be people, trees, houses, and so on. To filter out further interference, the preliminary objects within a set area of the last frame may be obtained. Here the set area may be the middle area of the image or a region near it, excluding areas such as the sky, considering that human bodies generally appear in the middle of or beside the road and that the image capturing apparatus 20 generally captures images along the road.
The region of interest image is then determined from the regions occupied by the preliminary objects within the set area. For example, when several preliminary objects lie within the set area, the whole region they occupy may be taken as the region of interest. This increases the probability that a preliminary object in the region of interest is a human body.
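The region-of-interest steps can be sketched on plain bounding boxes. Each frame is modeled as a mapping from a tracked object id to an `(x, y, w, h)` box; the set-area band (excluding the top quarter of the image as sky) is an illustrative assumption:

```python
def persistent_objects(frames):
    """Ids of objects present in every frame of a multi-frame sequence.
    Each frame maps object id -> (x, y, w, h) bounding box."""
    ids = set(frames[0])
    for f in frames[1:]:
        ids &= set(f)
    return ids

def in_set_area(box, img_w, img_h):
    """Keep boxes whose center lies in the middle/lower band of the image
    (band bounds are assumptions; img_w kept for signature symmetry)."""
    x, y, w, h = box
    cy = y + h / 2
    return img_h * 0.25 <= cy <= img_h * 0.95  # exclude the sky region at top

def region_of_interest(frames, img_w, img_h):
    """Union bounding box of persistent objects inside the set area of the
    last frame; None when no candidate survives."""
    last = frames[-1]
    boxes = [last[i] for i in persistent_objects(frames)
             if in_set_area(last[i], img_w, img_h)]
    if not boxes:
        return None
    x0 = min(b[0] for b in boxes); y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes); y1 = max(b[1] + b[3] for b in boxes)
    return (x0, y0, x1 - x0, y1 - y0)
```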
After the region of interest image is determined, it can be recognized by a pre-established and trained recognition model, which may be obtained in advance by training a neural network model. Optionally, a neural network model, for example a convolutional neural network, is established in advance (the model type is not particularly limited), a number of sample images containing the target object are obtained, and the neural network model is trained on those sample images until it meets the accuracy requirement.
In this embodiment, in order to improve accuracy of object recognition, an upsampling process may be performed on the region of interest image, and a subsequent processing operation may be performed based on the upsampled region of interest image.
When the recognition model is used for target object recognition, the region of interest image may be divided into several strip-shaped region images along the vertical direction. This reduces the amount of image data processed per recognition and, since several target objects may be present in the region of interest, avoids recognition interference between them. Each strip-shaped image is then imported into the pre-established and trained recognition model, and whether a target object exists in each strip is determined from the recognition results.
In this embodiment, the region of interest image is divided in the vertical direction, which considers that the human body appears in the image in the vertical direction, so that in the case that there are a plurality of target objects, it is convenient to divide each target object.
In addition, in this embodiment, if the division granularity is poorly chosen, a single target object may be split across different strip images, or several target objects may fall into the same strip, which affects the recognition result. In practice, therefore, the region of interest image may be divided several times with different granularities, the strip images from each division may be imported into the recognition model, and the best recognition result may be selected.
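The multi-granularity division can be sketched as follows. Selecting "the better recognition result" by the highest total per-strip score is one plausible selection rule, not the patent's stated criterion:

```python
def split_strips(roi, n):
    """Divide a region-of-interest box (x, y, w, h) into n vertical strips."""
    x, y, w, h = roi
    edges = [x + round(i * w / n) for i in range(n + 1)]
    return [(edges[i], y, edges[i + 1] - edges[i], h) for i in range(n)]

def best_division(roi, recognize, counts=(2, 3, 4)):
    """Try several division granularities and keep the one whose strips yield
    the highest total recognition score (illustrative selection rule).
    `recognize` maps a strip box to a confidence score."""
    best, best_score = None, float("-inf")
    for n in counts:
        strips = split_strips(roi, n)
        score = sum(recognize(s) for s in strips)
        if score > best_score:
            best, best_score = strips, score
    return best
```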
In this embodiment, if the above steps determine that a target object, that is, a human body, exists in the image, there may still be interference in practice from human figures on, for example, human-shaped cardboard cutouts or billboards. The travel control method of this embodiment therefore further includes the following step to exclude such interference:
the real size of the target object is calculated from its size in the image using a preset conversion algorithm, and whether the real size falls within a preset range is detected. If it does, the target object is most likely a real human body, and the subsequent recognition of the preset gesture can proceed.
Because the size of the target object in the image is measured in the image coordinate system, a scale conversion exists between the image coordinate system and the real-world physical coordinate system once the real scene is projected onto the image; this conversion relation can be obtained in advance and stored in the control apparatus 10. The preset conversion algorithm may simply apply this pre-stored scale conversion.
In this embodiment, the size may be the vertical size. The user of the mobile device is typically an adult, whose height usually falls within a certain range, for example between 1.5 and 2 meters. The human figure on, for example, a humanoid board or billboard is generally smaller, so its recovered size usually falls outside this preset range. Therefore, interference from non-real human bodies can be eliminated by detecting whether the real size of the target object is within the preset range.
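The size check above reduces to one multiplication plus a range test. The sketch below assumes the pre-stored conversion relation collapses to a single metres-per-pixel factor at the target's position, and the 1.5 m to 2 m bounds are example values, not limits fixed by the embodiment.

```python
def real_height_m(pixel_height: float, metres_per_pixel: float) -> float:
    """Convert a target's vertical size in pixels to metres using a
    pre-stored image-to-physical scale (the embodiment's 'preset
    conversion algorithm', reduced here to a single factor)."""
    return pixel_height * metres_per_pixel

def is_plausible_human(pixel_height: float, metres_per_pixel: float,
                       min_m: float = 1.5, max_m: float = 2.0) -> bool:
    """Reject humanoid boards / billboards whose recovered real size
    falls outside the expected adult height range."""
    h = real_height_m(pixel_height, metres_per_pixel)
    return min_m <= h <= max_m
```

Only targets passing this filter would proceed to the preset-gesture recognition step.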
After it is determined in this way that a real human body exists in the image, whether the gesture of the target object is the preset gesture for triggering the mobile device to stop can be judged according to the preset discrimination rule.
Referring to fig. 5, in this embodiment, as a possible implementation manner, the gesture recognition may be implemented by the following steps:
step S3231, the target object is imported into a pre-established classifier for classification and discrimination, wherein the classifier is obtained by training in advance based on a plurality of positive sample images and a plurality of negative sample images, the positive sample images comprising training objects whose postures are the preset posture, and the negative sample images comprising training objects with postures other than the preset posture.
Step S3232, judging whether the gesture of the target object is a preset gesture according to the classification and discrimination result of the classifier.
In this embodiment, the classifier may be established by training a support vector machine in advance: the support vector machine is trained using a plurality of positive sample images containing training objects whose postures are the preset posture, and a plurality of negative sample images containing training objects with other postures. The training object is a human body image, and the preset gesture may be set as required, for example raising both arms horizontally or bending one arm, which is not limited in this embodiment. The trained classifier can then distinguish the preset gesture from non-preset gestures.
The gesture of the target object is identified through the classifier, and whether the gesture of the target object is a preset gesture or not can be determined according to the output result of the classifier, for example, when the output result of the classifier is 0, the gesture of the target object is indicated to be not the preset gesture, and when the output result of the classifier is 1, the gesture of the target object is indicated to be the preset gesture.
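The support-vector-machine training and the 0/1 output convention above can be sketched generically. The embodiment does not specify the feature extraction, so the example below substitutes synthetic feature vectors where descriptors of the positive/negative sample images (e.g. HOG features) would go, and uses scikit-learn's `SVC` purely as an illustrative stand-in.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in features: in practice these would be descriptors extracted
# from the positive (preset-gesture) and negative sample images.
positive = rng.normal(loc=1.0, size=(50, 16))
negative = rng.normal(loc=-1.0, size=(50, 16))

X = np.vstack([positive, negative])
y = np.array([1] * 50 + [0] * 50)  # 1 = preset gesture, 0 = other gesture

clf = SVC(kernel="rbf").fit(X, y)

def is_preset_gesture(features: np.ndarray) -> bool:
    """Mirror the embodiment's convention: classifier output 1 means the
    target's gesture is the preset gesture, 0 means it is not."""
    return int(clf.predict(features.reshape(1, -1))[0]) == 1
```

The binary output maps directly onto the stop/no-stop decision described in the text.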
In addition, referring to fig. 6, in this embodiment, as another possible implementation manner, the gesture recognition may be further implemented by:
step S2333, obtaining skeleton key points of the hand area of the human body image;
step S2334, obtaining the relative angle of the skeletal key points of the hand region relative to the torso of the human body;
step S2335, detect whether the relative angle is in the preset angle range, if yes, then execute the following step S2336, otherwise execute the following step S2337.
Step S2336, determining the gesture of the target object as a preset gesture.
Step S2337, determining that the gesture of the target object is not the preset gesture.
In this embodiment, as described above, the target object is a human body image, and whether the posture of the human body is the preset posture can be determined by identifying the skeletal key points of the human body. For ease of implementation, the preset posture may be set as an arm posture.
The recognized human body image includes the four limbs and the trunk of the human body; therefore, taking the trunk as a reference, whether the posture is the preset posture can be determined by detecting the state of the hand region relative to the trunk.
Referring to fig. 7 in combination, skeletal keypoints of a hand region of a human body image may be obtained, including, for example, shoulder keypoints, elbow keypoints, wrist keypoints, and the like. And then obtaining the relative angle of the skeletal key points of the hand area relative to the human trunk. For example, the obtained shoulder, elbow and wrist keypoints may be sequentially connected to obtain the relative angle between the connection and the torso.
If the relative angle is within the preset angle range, the posture can be judged to be the preset posture. For example, if the preset posture is holding the arms level, the relative angle between the connecting line and the trunk should be about 90 degrees; thus, if the measured relative angle is about 90 degrees, the posture can be judged to be the preset posture.
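The angle test above can be sketched in a few lines. This assumes 2-D image coordinates for the keypoints and adds a hip keypoint to define the torso line; the 15-degree tolerance is an assumed example, not a value fixed by the embodiment.

```python
import math

def angle_between(v1, v2) -> float:
    """Angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def arm_torso_angle(shoulder, elbow, wrist, hip) -> float:
    """Relative angle of the shoulder->wrist connecting line against the
    torso (shoulder->hip line), in image (x, y) coordinates. The elbow
    keypoint could additionally verify the arm is straight; here it is
    accepted as given."""
    arm = (wrist[0] - shoulder[0], wrist[1] - shoulder[1])
    torso = (hip[0] - shoulder[0], hip[1] - shoulder[1])
    return angle_between(arm, torso)

def is_arm_level(angle_deg: float, target: float = 90.0,
                 tolerance: float = 15.0) -> bool:
    """'About 90 degrees' check for the arms-level preset posture."""
    return abs(angle_deg - target) <= tolerance
```

With the wrist directly to the side of the shoulder and the hip straight below it, the arm and torso vectors are perpendicular, so the check passes.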
It should be noted that, the foregoing merely illustrates the preset gesture, and in implementation, different preset gesture forms may be set according to the requirement, which is not particularly limited in this embodiment.
If it is determined through the above steps that the gesture of the target object is the preset gesture, this indicates that the user hopes the mobile device will stop. The control device 10 may then send an indication message to the power equipment 30, so that the power equipment 30 controls the mobile device to stop traveling.
In this embodiment, considering that there is a certain distance between the mobile device and the target object when it is determined that the target object needs to stop the mobile device, in order to provide convenience for the client, referring to fig. 8, the travel control method provided in this embodiment further includes the following steps:
step S810, calculating a distance between the target object and the mobile device according to the depth information of the target object in the image.
Step S820, calculating a travel duration according to the distance and the travel speed of the mobile device.
Step S830, counting time according to the travel time length, and executing the step of sending an indication message to the power equipment 30 after the counting time is ended.
In this embodiment, the depth information of the target object in the image may be obtained by a depth recognition algorithm, where the depth recognition algorithm may be, for example, a recognition method based on focusing information, a recognition algorithm based on defocus information, or a recognition method based on brightness change, which are commonly used at present, and is not limited in this embodiment.
The distance between the target object and the mobile device can be calculated from the depth information of the target object in the image, and the travel duration can then be calculated by combining this distance with the traveling speed of the mobile device. The travel duration may be the time needed to travel exactly to the target object, or the time needed to travel to a position a preset interval away from the target object, for example one meter or two meters.
As such, the control device 10 may time the obtained travel time length and send an indication message to the power device 30 after the time is finished, for instructing the power device 30 to control the mobile apparatus to stop traveling.
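Steps S810 to S830 amount to one division plus a timer. The sketch below assumes constant speed and a configurable stop gap (the "preset interval"); `send_indication` is a hypothetical stand-in for the control device messaging the power equipment 30.

```python
import time

def travel_duration_s(depth_m: float, speed_m_s: float,
                      stop_gap_m: float = 1.0) -> float:
    """Time to travel from the current position to a point stop_gap_m
    short of the target, given the target's depth from the image."""
    return max(depth_m - stop_gap_m, 0.0) / speed_m_s

def stop_after_reaching(depth_m: float, speed_m_s: float, send_indication):
    """Time the computed travel duration, then fire the stop indication
    (step S830: send the message after the timing ends)."""
    time.sleep(travel_duration_s(depth_m, speed_m_s))
    send_indication()
```

A production controller would re-estimate depth as new frames arrive rather than trusting a single open-loop timer, but the timer matches the flow the embodiment describes.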
In this embodiment, when the mobile device is an unmanned vending vehicle, after the vehicle stops traveling, the user may purchase commodities through the touch screen on the vehicle; after the user completes payment successfully, the unmanned vending vehicle dispenses the corresponding commodities to complete the purchase.
After the user finishes purchasing, if no purchase operation is detected for a continuous set period, this indicates that no user currently has a purchase demand, and the unmanned vending vehicle can continue to travel along the preset route.
In this embodiment, during the traveling of the mobile device, the image acquired by the image capturing apparatus 20 on the mobile device is analyzed to determine whether a target object exists in the range ahead of the mobile device and whether its gesture is the preset gesture. If such a target object exists and its gesture is the preset gesture, it can be determined that a user currently hopes the mobile device will stop, and the power equipment 30 can control the mobile device accordingly. The mobile device can thus be automatically controlled to stop through image recognition, which solves the problem that existing mobile devices can only stay at fixed places and can hardly meet users' instant demands.
Referring to fig. 9, in order to perform the corresponding steps in the foregoing embodiments and various possible manners, an implementation manner of the travel control device 140 is given below, and alternatively, the travel control device 140 may employ the device structure of the control apparatus 10 shown in fig. 2. Further, fig. 9 is a functional block diagram of a travel control device 140 according to an embodiment of the present application. It should be noted that, the basic principle and the technical effects of the travel control device 140 provided in this embodiment are the same as those of the foregoing embodiment, and for brevity, reference should be made to the corresponding content in the foregoing embodiment. The travel control device 140 includes an image acquisition module 141, a detection module 142, and a transmission module 143.
An image acquisition module 141, configured to acquire an image acquired by the image capturing apparatus 20 during a traveling process of the mobile device. It will be appreciated that the image acquisition module 141 may be configured to perform step S310 described above, and reference may be made to the details of implementation of the image acquisition module 141 regarding step S310 described above.
The detecting module 142 is configured to detect whether a target object exists in the image and whether a gesture of the target object is a preset gesture. It will be appreciated that the detection module 142 may be used to perform step S320 described above, and reference may be made to the details of the implementation of the detection module 142 described above with respect to step S320.
And a sending module 143, configured to send an indication message to the power equipment 30 when the target object exists in the image and the gesture of the target object is the preset gesture, where the indication message is used to instruct the power equipment 30 to control the mobile device to stop travelling. It is understood that the transmitting module 143 may be used to perform the above step S330, and reference may be made to the above description of the step S330 for the detailed implementation of the transmitting module 143.
In one possible implementation, the detection module 142 may be configured to perform the detection operation by:
Determining a region of interest image in the image according to a predetermined rule;
importing the region of interest image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in the region of interest image according to the obtained recognition result;
if the target object exists in the region-of-interest image, judging whether the gesture of the target object is a preset gesture according to a preset judging rule.
In this embodiment, the acquired images include consecutive multi-frame images, and the detection module 142 may determine the region of interest image by:
aiming at the last frame of image in the multi-frame images, obtaining a preliminary object contained in the last frame of image, wherein the preliminary object is an object existing in each frame of image of the multi-frame images;
acquiring a preliminary object in a set area in the last frame of image;
and determining an interested area image in the image according to the area where the preliminary object in the set area is located.
In one possible implementation, the detection module 142 may be used to perform the identification of the target object by:
Dividing the region of interest image into a plurality of strip-shaped region images in the vertical direction;
and respectively importing each strip-shaped area image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in each strip-shaped area image according to the obtained recognition result.
In this embodiment, the detection module 142 may be further configured to:
according to the size of the target object in the image, calculating according to a preset conversion algorithm to obtain the real size of the target object;
and detecting whether the real size is in a preset range, and if so, executing the step of judging whether the gesture of the target object is a preset gesture according to a preset judging rule.
In this embodiment, as a possible implementation manner, the detection module 142 may perform the gesture determination by:
the target object is imported into a pre-established classifier for classification and discrimination, wherein the classifier is obtained by training based on a plurality of positive sample images and a plurality of negative sample images in advance, the positive sample images comprise training objects with the preset postures, and the negative sample images comprise training objects with postures other than the preset postures;
And judging whether the gesture of the target object is a preset gesture according to the classification judging result of the classifier.
As another possible implementation, the detection module 142 may also make the determination of the pose by:
obtaining skeletal key points of a hand region of the human body image;
obtaining a relative angle of skeletal keypoints of the hand region with respect to a torso of the human body;
and detecting whether the relative angle is in a preset angle range, and if so, judging that the gesture of the target object is a preset gesture.
In this embodiment, the travel control device 140 may further include a control module, which may be used to:
according to the depth information of the target object in the image, calculating to obtain the distance between the target object and the mobile device;
calculating to obtain a travel time length according to the distance and the travel speed of the mobile device;
and (3) counting according to the travel time length, and triggering the sending module 143 to send an indication message to the power equipment 30 after the counting is finished.
The travel control device 140 provided by the embodiment of the present application can execute the travel control method provided by any embodiment of the present application, and has the corresponding functional modules and beneficial effects of the execution method.
Alternatively, the above modules may be stored in the memory 120 shown in fig. 2 in the form of software or Firmware (Firmware) or solidified in an Operating System (OS) of the control device 10, and may be executed by the processor 110 in fig. 2. Meanwhile, data, codes of programs, and the like, which are required to execute the above-described modules, may be stored in the memory 120.
Embodiments of the present application also provide a machine-readable storage medium containing machine-executable instructions, which when executed by a computer processor, are operative to perform the associated operations of the travel control method provided by any of the embodiments of the present application.
In summary, according to the travel control method, the apparatus, the device, the readable storage medium and the mobile device provided by the embodiments of the present application, during the travel process of the mobile device, the image capturing apparatus 20 is used to capture an image, and analyze the image to detect whether a target object exists in the image and whether the gesture of the target object is a preset gesture. If the target object exists in the acquired image and the gesture of the target object is a preset gesture, the power equipment 30 can control the mobile device to stop travelling. In the scheme, through the mode of analyzing and processing the acquired image, when the target object is determined and the gesture of the target object is the preset gesture, the mobile device can be automatically controlled to stop, so that the problem that the existing mobile device can only stay at a fixed place and is difficult to well meet the instant requirement of a user on the mobile device is avoided.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present application should be included in the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A travel control method, characterized by being applied to a control apparatus in a mobile device, the mobile device further including an image pickup apparatus and a power apparatus respectively connected to the control apparatus, the method comprising:
acquiring an image acquired by the camera equipment in the travelling process of the mobile device;
detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture;
if the target object exists in the image and the gesture of the target object is the preset gesture, sending an indication message to the power equipment, wherein the indication message is used for indicating the power equipment to control the mobile device to stop advancing; the step of detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture comprises the following steps:
Determining a region of interest image in the image according to a predetermined rule;
importing the region of interest image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in the region of interest image according to the obtained recognition result;
if the target object exists in the region-of-interest image, judging whether the gesture of the target object is a preset gesture according to a preset judging rule; the image comprises a succession of multi-frame images, and the step of determining the image of the region of interest in the images according to a predetermined rule comprises:
aiming at the last frame of image in the multi-frame images, obtaining a preliminary object contained in the last frame of image, wherein the preliminary object is an object existing in each frame of image of the multi-frame images;
acquiring a preliminary object in a set area in the last frame of image;
determining an interested region image in the image according to the region where the preliminary object in the set region is located;
the step of detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture further includes:
According to the size of the target object in the image, calculating according to a preset conversion algorithm to obtain the real size of the target object;
and detecting whether the real size is in a preset range, and if so, executing the step of judging whether the gesture of the target object is a preset gesture according to a preset judging rule.
2. The travel control method according to claim 1, wherein the step of importing the region-of-interest image into a recognition model which is built and trained in advance to recognize, and determining whether the target object exists in the region-of-interest image based on the obtained recognition result, comprises:
dividing the region of interest image into a plurality of strip-shaped region images in the vertical direction;
and respectively importing each strip-shaped area image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in each strip-shaped area image according to the obtained recognition result.
3. The travel control method according to claim 1, wherein the step of judging whether the posture of the target object is a preset posture according to a preset discrimination rule includes:
The target object is imported into a pre-established classifier for classification and discrimination, wherein the classifier is obtained by training based on a plurality of positive sample images and a plurality of negative sample images in advance, the positive sample images comprise training objects with the preset postures, and the negative sample images comprise training objects with postures other than the preset postures;
and judging whether the gesture of the target object is a preset gesture according to the classification judging result of the classifier.
4. The travel control method according to claim 1, wherein the target object is a human body image, and the step of judging whether the posture of the target object is a preset posture according to a preset judgment rule includes:
obtaining skeletal key points of a hand region of the human body image;
obtaining a relative angle of skeletal keypoints of the hand region with respect to a torso of the human body;
and detecting whether the relative angle is in a preset angle range, and if so, judging that the gesture of the target object is a preset gesture.
5. The travel control method according to any one of claims 1 to 4, characterized in that the method further comprises:
According to the depth information of the target object in the image, calculating to obtain the distance between the target object and the mobile device;
calculating to obtain a travel time length according to the distance and the travel speed of the mobile device;
and timing according to the travelling time length, and executing the step of sending the indication message to the power equipment after the timing is finished.
6. A travel control apparatus characterized by being applied to a control device in a mobile device, the mobile device further including an image pickup device and a power device respectively connected to the control device, the apparatus comprising:
the image acquisition module is used for acquiring the image acquired by the camera equipment in the advancing process of the mobile device;
the detection module is used for detecting whether a target object exists in the image and whether the gesture of the target object is a preset gesture;
the sending module is used for sending an indication message to the power equipment when the target object exists in the image and the gesture of the target object is the preset gesture, wherein the indication message is used for indicating the power equipment to control the mobile device to stop travelling;
the detection module is used for: determining a region of interest image in the image according to a predetermined rule;
Importing the region of interest image into a pre-established and trained recognition model for recognition, and determining whether the target object exists in the region of interest image according to the obtained recognition result;
if the target object exists in the region-of-interest image, judging whether the gesture of the target object is a preset gesture according to a preset judging rule;
the image comprises a succession of multi-frame images, and the step of determining the image of the region of interest in the images according to a predetermined rule comprises:
aiming at the last frame of image in the multi-frame images, obtaining a preliminary object contained in the last frame of image, wherein the preliminary object is an object existing in each frame of image of the multi-frame images;
acquiring a preliminary object in a set area in the last frame of image;
determining an interested region image in the image according to the region where the preliminary object in the set region is located;
the detection module is used for:
according to the size of the target object in the image, calculating according to a preset conversion algorithm to obtain the real size of the target object;
and detecting whether the real size is in a preset range, and if so, executing the step of judging whether the gesture of the target object is a preset gesture according to a preset judging rule.
7. A control device comprising one or more storage media and one or more processors in communication with the storage media, the one or more storage media storing machine-executable instructions that, when executed, cause the one or more processors to perform the travel control method of any of claims 1-5.
8. A machine-readable storage medium storing machine-executable instructions that when executed implement the travel control method of any one of claims 1-5.
9. A mobile device comprising an image capturing apparatus, a power apparatus, and the control apparatus of claim 7, the control apparatus being connected to the image capturing apparatus and the power apparatus.
CN201911192556.4A 2019-11-28 2019-11-28 Travel control method, apparatus, device, readable storage medium, and mobile apparatus Active CN110955243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911192556.4A CN110955243B (en) 2019-11-28 2019-11-28 Travel control method, apparatus, device, readable storage medium, and mobile apparatus


Publications (2)

Publication Number Publication Date
CN110955243A CN110955243A (en) 2020-04-03
CN110955243B true CN110955243B (en) 2023-10-20

Family

ID=69978751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911192556.4A Active CN110955243B (en) 2019-11-28 2019-11-28 Travel control method, apparatus, device, readable storage medium, and mobile apparatus

Country Status (1)

Country Link
CN (1) CN110955243B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486757B (en) * 2021-06-29 2022-04-05 北京科技大学 Multi-person linear running test timing method based on human skeleton key point detection
CN113679298B (en) * 2021-08-27 2022-05-10 美智纵横科技有限责任公司 Robot control method, robot control device, robot, and readable storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102915521A (en) * 2012-08-30 2013-02-06 中兴通讯股份有限公司 Method and device for processing mobile terminal images
CN103049735A (en) * 2011-10-14 2013-04-17 株式会社理光 Method for detecting particular object in image and equipment for detecting particular object in image
CN106558224A (en) * 2015-09-30 2017-04-05 徐贵力 A kind of traffic intelligent monitoring and managing method based on computer vision
CN108073864A (en) * 2016-11-15 2018-05-25 北京市商汤科技开发有限公司 Target object detection method, apparatus and system and neural network structure
CN108875730A (en) * 2017-05-16 2018-11-23 中兴通讯股份有限公司 A kind of deep learning sample collection method, apparatus, equipment and storage medium
CN109109857A (en) * 2018-09-05 2019-01-01 深圳普思英察科技有限公司 A kind of unmanned vendors' cart and its parking method and device
CN109218695A (en) * 2017-06-30 2019-01-15 中国电信股份有限公司 Video image enhancing method, device, analysis system and storage medium
CN109447005A (en) * 2018-11-01 2019-03-08 珠海格力电器股份有限公司 A kind of gesture identification method, device, storage medium and electric appliance
CN109671090A (en) * 2018-11-12 2019-04-23 深圳佑驾创新科技有限公司 Image processing method, device, equipment and storage medium based on far infrared
CN109871800A (en) * 2019-02-13 2019-06-11 北京健康有益科技有限公司 A kind of estimation method of human posture, device and storage medium
CN110225335A (en) * 2019-06-20 2019-09-10 中国石油大学(北京) Camera stability assessment method and device
CN110428442A (en) * 2019-08-07 2019-11-08 北京百度网讯科技有限公司 Target determines method, targeting system and monitoring security system
CN110443167A (en) * 2019-07-23 2019-11-12 中国建设银行股份有限公司 Intelligent identification Method, intelligent interactive method and the relevant apparatus of traditional culture gesture
CN110471526A (en) * 2019-06-28 2019-11-19 广东工业大学 A kind of human body attitude estimates the unmanned aerial vehicle (UAV) control method in conjunction with gesture identification
CN110490125A (en) * 2019-08-15 2019-11-22 成都睿晓科技有限公司 A kind of fueling area service quality detection system detected automatically based on gesture

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140358735A1 (en) * 2013-05-31 2014-12-04 Monscierge, LLC Modifying An Application To Display Branding Information Identifying A Particular Business



Similar Documents

Publication Publication Date Title
US11501523B2 (en) Goods sensing system and method for goods sensing based on image monitoring
US10753758B2 (en) Top-down refinement in lane marking navigation
CN108241844B (en) Bus passenger flow statistical method and device and electronic equipment
EP3620966A1 (en) Object detection method and apparatus
CN106952303B (en) Vehicle distance detection method, device and system
CN108734162B (en) Method, system, equipment and storage medium for identifying target in commodity image
CN108198044B (en) Commodity information display method, commodity information display device, commodity information display medium and electronic equipment
CN110163904A (en) Object labeling method, movement control method, device, equipment, and storage medium
US11669972B2 (en) Geometry-aware instance segmentation in stereo image capture processes
EP3676784A1 (en) Analyzing images and videos of damaged vehicles to determine damaged vehicle parts and vehicle asymmetries
CN109154981A (en) Road plane output with cross fall
CN110955243B (en) Travel control method, apparatus, device, readable storage medium, and mobile apparatus
CN110717918B (en) Pedestrian detection method and device
CN110197106A (en) Object designation system and method
CN113158833B (en) Unmanned vehicle control command method based on human body posture
CN114898249B (en) Method, system and storage medium for confirming number of articles in shopping cart
CN112784814B (en) Gesture recognition method for vehicle reversing into a warehouse and reverse-parking guidance system for a transport vehicle
JP2021111273A (en) Learning model generation method, program and information processor
JP2017224148A (en) Human flow analysis system
CN113255444A (en) Training method of image recognition model, image recognition method and device
US20210350142A1 (en) In-train positioning and indoor positioning
CN107767366B (en) Power transmission line fitting method and device
JP2014062415A (en) Trajectory detector and trajectory monitoring device
CN114882363A (en) Stain treatment method and device for a sweeping robot
CN112489240B (en) Commodity display inspection method, inspection robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant