CN109740462B - Target identification following method - Google Patents


Info

Publication number
CN109740462B (application CN201811572824.0A)
Authority
CN (China)
Prior art keywords
following; information; position information; vehicle; image information
Legal status
Active (granted)
Other languages
Chinese (zh)
Other versions
CN109740462A
Inventors
张德兆, 王肖, 张放, 李晓飞, 霍舒豪
Original and current assignee
Beijing Idriverplus Technologies Co Ltd
Priority and filing date
2018-12-21
Application filed by Beijing Idriverplus Technologies Co Ltd; priority to CN201811572824.0A; published as CN109740462A (2019-05-10); granted as CN109740462B (2020-10-27).

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target identification following method, which comprises the following steps: acquiring multiple frames of image information collected, within a preset duration, by an acquisition device of a vehicle operating within a patrol range; determining a suspicious object; when the object is in a moving state, determining it as the following target; determining from map data whether the following mode selection information is fixed following or random following; when it is fixed following, determining a plurality of position information of the following target from the multi-frame image information and the map data and generating a first following path; when the distance between the vehicle and the following target is not greater than a preset distance threshold, calculating angle information; after the pan-tilt head rotates accordingly, following along the first following path; when it is random following, predicting a second following path; and when the time difference is not greater than a preset time threshold, following along the second following path. In this way, the data of the autonomous vehicle is put to use, a security function is achieved, and security investment is saved.

Description

Target identification following method
Technical Field
The invention relates to the technical field of security protection, in particular to a target identification following method.
Background
In the prior art, security protection typically relies on deploying cameras and running face recognition on the data they collect, so as to identify abnormal personnel. However, this approach suffers from high cost, monitoring blind spots, and similar defects.
The unmanned equipment senses the road environment through the vehicle-mounted sensing system, automatically plans a driving route and controls the vehicle to reach a preset target. The vehicle-mounted sensor can sense the surrounding environment of the vehicle, and control the steering and speed of the vehicle according to the road, vehicle position and obstacle information obtained by sensing, so that the vehicle can safely and reliably run on the road.
Existing unmanned vehicles generate a large amount of data while driving, but this data is used only to evaluate the performance of the unmanned vehicles and serves no other purpose.
Therefore, how to utilize the data of unmanned equipment to reduce urban security costs, and to intelligently generate a following mode when the two are combined, is an urgent problem to be solved.
Disclosure of Invention
An embodiment of the present invention provides a target identification following method to solve the above problems in the prior art.
In order to solve the above problems, the present invention provides a target identification following method, the method comprising:
acquiring multi-frame image information within a preset duration, collected by an acquisition device of a vehicle within a patrol range; each frame of image information comprises time information indicating when the image information was acquired;
processing the image information to determine a suspicious object; the suspicious object comprises a suspicious article or a perpetrator of a suspicious event;
determining the state of the suspicious object according to the time information; the state comprises a moving state or a static state;
when the state is a moving state, determining the suspicious object as a following target;
acquiring the position information of the vehicle and map data corresponding to the position information of the vehicle;
determining that the following mode selection information is fixed following or random following according to the map data;
when the following mode selection information is fixed following, determining a plurality of position information of the following target according to the multi-frame image information and the map data;
generating a first following path according to the plurality of position information of the following target;
when the following target is followed, calculating the distance between the vehicle and the following target according to the current position information of the vehicle, the position information of the following target and the first following path;
when the distance is not larger than a preset distance threshold value, calculating angle information between the vehicle and the following target according to the first following path and the position information of the vehicle;
generating a control signal according to the angle information and the current position information of the vehicle;
sending the control signal to a motor controller on a pan-tilt head for driving the acquisition device to rotate, so that the motor controller controls the rotating speed of a motor according to the control signal and drives the acquisition device on the pan-tilt head to rotate through the rotation of the motor;
after rotating, following according to the first following path;
when the following mode selection information is random following, determining a plurality of position information of the following target according to the multi-frame image information;
predicting a second following path of the following target according to the multi-frame image information and the map data;
calculating a time difference value between the vehicle and the following target according to the plurality of position information of the following target and the position information of the vehicle;
and when the time difference is not greater than a preset time threshold, following according to the second following path.
In a possible implementation manner, the processing the image information to determine a suspicious object specifically includes:
respectively matching the features in the image information with a suspicious-article feature library and a suspect image library;
when the matching with the suspicious-article feature library is successful, determining that the suspicious object is a suspicious article;
and when the matching with the suspect image library is successful, determining that the suspicious object is a perpetrator of a suspicious event.
In a possible implementation manner, before the following is performed according to the first following path after the rotating, the method further includes:
when the difference value of the first angle information and the second angle information is larger than the deflection range of the pan-tilt head, predicting a predicted track between the first position information and the second position information according to first position information corresponding to the first angle information and second position information corresponding to the second angle information; the first angle information is an included angle between the vehicle and first position information in the plurality of position information of the following target, the second angle information is an included angle between the vehicle and the position information adjacent to the first position information, the first image information corresponds to the first position information, and the second image information corresponds to the position information adjacent to the first position information;
and splicing other position information except the first position information and the adjacent position information of the first position information with the predicted track to obtain a first following path of the following target.
In a possible implementation manner, the predicting a second following path of the following target according to the multiple frames of image information and the map data specifically includes:
processing the image information to determine the motions and/or subtle facial features of the following target;
predicting the next action of the following target according to the motions and/or subtle facial features;
and predicting the track of the following target within a preset time according to the next action and the map data.
In one possible implementation, when following according to the first following path or the second following path, the method further includes:
acquiring real-time image information of the following target;
matching the features in the real-time image information with the suspicious-article feature library and the suspect-image feature library to generate a matching result;
and analyzing the matching result, and when the matching with at least one of the suspicious-article feature library or the suspect-image feature library is successful, sending the real-time image information, the matching result and the current position information of the vehicle to a third-party server.
In a possible implementation manner, when the following mode selection information is a fixed following mode, determining a plurality of pieces of position information of the following target according to the multi-frame image information and the map data specifically includes:
processing the multi-frame image information to acquire environmental data in each frame of image information in the multi-frame image information;
and fitting the environment data and the map data, and determining the position information of the following target according to a fitting result.
In one possible implementation, the method further includes:
and when the state is a static state, sending the image information and the suspicious object to a third-party server.
In a possible implementation manner, after following according to the following path once the acquisition device has rotated, the method further includes:
when the distance is not smaller than a preset distance threshold, generating alarm information, the alarm information comprising the image information captured up to the current moment;
and sending the alarm information to a server and/or a third-party server, so that the server and/or the third-party server processes that image information.
By applying the target identification following method provided by the invention, the unmanned equipment determines a suspicious object from the image information collected by its acquisition device, takes a suspicious object in a moving state as the following target, generates different following modes from the map data once the target is locked, and follows the target in the corresponding mode. The data of the autonomous vehicle is thereby put to use, a security function is achieved, and security investment is saved.
Drawings
Fig. 1 is a schematic flow chart of a target identification following method according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a target identification following method according to an embodiment of the present invention. The execution subject of the method may be a control unit of an autonomous vehicle. The vehicle control unit may be understood as a control module for controlling the travel of the vehicle; it is the data processing center of the unmanned vehicle and can perform autonomous decision making, path planning, and the like. The identification following method is applied to unmanned scenes, in particular to unmanned vehicles, and especially to unmanned vehicles in cities (i.e., outside closed campuses). In this way, the data of the unmanned equipment can be utilized and urban security costs can be saved.
As shown in fig. 1, the method comprises the steps of:
Step 101, acquiring multi-frame image information within a preset duration, collected by an acquisition device of a vehicle within a patrol range; each frame of image information includes time information indicating when the image information was acquired.
Specifically, in order to save human resources, the autonomous vehicle can be used for street cleaning, and while cleaning automatically it can also patrol the road section for safety. By way of example and not limitation, the cleaning work may be carried out in a period with few pedestrians, such as 00:00-05:00, and the cleaning vehicle can patrol the cleaned road section while it cleans.
Specifically, the vehicle is provided with an acquisition device, which may be a binocular camera. The binocular camera acquires video of the road section the vehicle passes; the video data is then processed, and multi-frame image information is extracted from it. Each frame of image information includes time information.
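By way of example and not limitation, the frame-extraction step can be sketched as follows in Python (using OpenCV; the video source, the 10 s duration, and the 0.5 s sampling interval are illustrative assumptions, not values specified by the patent):

```python
# Sketch: collect timestamped frames from the acquisition device's video stream.
# `video_source`, the duration, and the sampling interval are assumptions.
import time
import cv2

def extract_frames(video_source, duration_s=10.0, interval_s=0.5):
    """Return a list of (timestamp, frame) pairs covering a preset duration."""
    cap = cv2.VideoCapture(video_source)
    frames, start, last_sample = [], time.time(), 0.0
    while time.time() - start < duration_s:
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        if now - last_sample >= interval_s:
            frames.append((now, frame))   # each frame carries its acquisition time
            last_sample = now
    cap.release()
    return frames
```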
Step 102, processing the image information to determine a suspicious object; the suspicious object includes a suspicious article or a perpetrator of a suspicious event.
Specifically, the image information is processed, for example by feature extraction. The extracted features are matched against the suspicious-article feature library and the suspect-image feature library respectively; the generated matching result may be a matching degree, and when the matching degree is greater than a preset threshold, the match is deemed successful, i.e., the suspicious object in the image information is confirmed to be a suspicious article, or the perpetrator of the suspicious event is confirmed to be a suspect in the suspect-image feature library.
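By way of example and not limitation, the matching step might look like the following sketch; the cosine-similarity measure and the 0.8 threshold are assumptions for illustration, since the patent does not fix a matching algorithm:

```python
# Sketch: match an extracted feature vector against the suspicious-article and
# suspect-image feature libraries (each an array with one feature vector per row).
import numpy as np

def best_similarity(feature, library):
    """Highest cosine similarity between `feature` and any library entry."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    f = feature / np.linalg.norm(feature)
    return float(np.max(lib @ f))

def classify(feature, article_lib, suspect_lib, threshold=0.8):
    if best_similarity(feature, article_lib) > threshold:
        return "suspicious article"
    if best_similarity(feature, suspect_lib) > threshold:
        return "perpetrator of suspicious event"
    return None   # no suspicious object in this frame
```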
By way of example and not limitation, a suspicious article may be a controlled instrument, a package being carried off, and the like, and a suspicious event may be, for example, forcibly carrying away a child.
Furthermore, besides the binocular camera, the vehicle is also fitted with various radars, such as a lidar, which can acquire laser point cloud data. The contour of the suspicious article or of a face can be determined from the lidar point cloud and matched against the features in the image information, further improving identification accuracy.
Wherein the image information may contain both suspicious items and suspicious events.
Further, in order to improve matching accuracy, a secondary match may be performed after the first match succeeds: the currently matched image information, together with the suspicious article in the suspicious-article feature library and/or the suspect in the suspect-image feature library, may be matched against another, more accurate feature library on the vehicle, or sent to a server for matching; the suspicious object is confirmed only after the secondary match also succeeds.
Step 103, determining the state of the suspicious object according to the time information.
Wherein the state comprises a moving state or a static state; whether the suspicious object is moving or static is determined from the time information of each frame of image. When the suspicious object is in a static state, the information from the two matches and the acquired image information can be reported directly to the server.
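A minimal sketch of the state determination follows, assuming per-frame target positions have already been recovered; the 0.3 m/s speed threshold is an illustrative assumption:

```python
# Sketch: decide moving vs. static from timestamped positions of the suspicious object.
import math

def target_state(observations, speed_threshold=0.3):
    """observations: list of (t_seconds, x_m, y_m); returns "moving" or "static"."""
    if len(observations) < 2:
        return "static"
    (t0, x0, y0), (t1, x1, y1) = observations[0], observations[-1]
    if t1 <= t0:
        return "static"
    speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    return "moving" if speed > speed_threshold else "static"
```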
The server may be a third-party server, or a server of some organization, such as an agency that handles missing-person cases. The third-party server can then conveniently use the information for security work. This saves security costs and expands the security coverage, enabling protection even in areas where no cameras are deployed.
Step 104, when the state is the moving state, determining the suspicious object as the following target.
Step 105, acquiring the position information of the vehicle and map data corresponding to the position information of the vehicle.
Specifically, the position information of the vehicle itself may be acquired by a positioning module on the vehicle, such as a Global Positioning System (GPS) receiver. The position information can also be obtained by sending a query message to the server and parsing the response message, which carries the position information.
When the vehicle is at a certain location, a map of that location may be loaded; for example, when the vehicle is on Street A, a map of City A, the administrative unit above Street A, may be loaded. The map may be downloaded from a server or preloaded on the vehicle; the present application does not limit this.
The position information comprises longitude and latitude data, driving direction information and time information.
Step 106, determining the following mode selection information to be fixed following or random following according to the map data.
Specifically, the control unit automatically analyzes the terrain in the map data, for example to estimate a tracking difficulty, matches the difficulty against a pre-stored difficulty table, and selects the following mode automatically. For example, if analysis of the map data shows the current location is a plain with flat roads and few buildings, the tracking difficulty might be 50%; the difficulty table maps this difficulty to fixed following, so fixed following is output and used subsequently. If the current location is a street with steep slopes, many bends, and many buildings, the tracking difficulty might be 70%; the difficulty table maps this difficulty to random following, so random following is output and used subsequently.
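The difficulty-table lookup can be sketched as follows; the feature weights and the 60% boundary between fixed and random following are assumptions chosen only to reproduce the two examples above:

```python
# Sketch: estimate tracking difficulty from terrain features, then look up the
# following mode in a pre-stored difficulty table.
def tracking_difficulty(slope, bends, building_density):
    """Each input normalized to [0, 1]; returns a difficulty in [0, 1]."""
    return min(1.0, 0.4 * slope + 0.3 * bends + 0.3 * building_density)

def select_following_mode(difficulty, table=((0.6, "fixed"), (1.0, "random"))):
    """Pre-stored difficulty table: up to 60% difficulty uses fixed following."""
    for upper_bound, mode in table:
        if difficulty <= upper_bound:
            return mode
    return "random"

# A flat plain with few buildings (difficulty ~0.5) yields "fixed";
# a steep, winding, built-up street (difficulty ~0.7) yields "random".
```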
Step 107, when the following mode selection information is fixed following, determining a plurality of position information of the following target according to the multi-frame image information and the map data.
Specifically, the position information of the following target may be acquired by processing the acquired image information.
Each frame of image information can be processed first to acquire the environmental data in that frame; the environmental data is then fitted with the preset map data, and a plurality of position information of the following target is determined according to the fitting result.
The image information includes environmental data such as building identification, traffic identification, road identification, and the like.
After the environment data is fitted to the map data, the features common to both can be processed together to calculate the position information of the following target.
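By way of example and not limitation, one reading of this fitting step is sketched below: landmarks recognized in a frame are matched to their map coordinates, and the target position is each landmark position plus the binocular-depth offset to the target. All names are illustrative; the patent does not specify the fitting algorithm:

```python
# Sketch: locate the following target by fitting recognized landmarks to the map.
import numpy as np

def locate_target(seen_landmarks, map_landmarks, target_offsets):
    """
    seen_landmarks: ids of landmarks recognized in the frame
    map_landmarks:  dict id -> (x, y) map coordinates
    target_offsets: dict id -> (dx, dy) target offset from that landmark,
                    measured from binocular depth
    Returns the target position averaged over all matched landmarks, or None.
    """
    estimates = [np.asarray(map_landmarks[lid]) + np.asarray(target_offsets[lid])
                 for lid in seen_landmarks if lid in map_landmarks]
    return tuple(np.mean(estimates, axis=0)) if estimates else None
```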
Step 108, generating a first following path according to the plurality of position information of the following target.
Specifically, according to the time information, the position information is spliced to generate an original first following path of the following target.
The original first following path may be a straight line, a curve, a broken line, or any combination thereof. For curved and broken segments, the curvature can be calculated and compared with the reciprocal of the vehicle's minimum turning radius; where the curvature is greater than that reciprocal, smoothing is performed to obtain the first following path, which the vehicle can then travel along.
Wherein the minimum turning radius of the vehicle is a known parameter of the vehicle. The smoothing can be performed by interpolation, which is not described here in detail.
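The curvature check and smoothing can be sketched as below; Menger curvature and Chaikin corner cutting stand in for the curvature estimate and interpolation method, which the patent leaves unspecified:

```python
# Sketch: flag waypoints whose curvature exceeds 1 / minimum turning radius and
# smooth the spliced path until it is drivable.
import numpy as np

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three points (0 if collinear)."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return 0.0 if a * b * c == 0 else 2.0 * area2 / (a * b * c)

def first_following_path(waypoints, min_turning_radius, iterations=2):
    """waypoints: (N, 2) array of spliced target positions."""
    pts = np.asarray(waypoints, dtype=float)
    kappa_max = 1.0 / min_turning_radius
    sharp = any(menger_curvature(pts[i - 1], pts[i], pts[i + 1]) > kappa_max
                for i in range(1, len(pts) - 1))
    if not sharp:
        return pts
    for _ in range(iterations):          # Chaikin corner cutting as the smoother
        out = [pts[0]]
        for i in range(len(pts) - 1):
            out.append(0.75 * pts[i] + 0.25 * pts[i + 1])
            out.append(0.25 * pts[i] + 0.75 * pts[i + 1])
        out.append(pts[-1])
        pts = np.asarray(out)
    return pts
```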
Step 109, when the following target is followed, calculating the distance between the vehicle and the following target according to the current position information of the vehicle, the plurality of position information of the following target and the first following path.
Specifically, a plurality of position information of the following target can be determined by using the image information, and at this time, the distance between the vehicle and the following target can be determined by combining the position information of the vehicle and the first following path.
Step 110, when the distance is not greater than a preset distance threshold, calculating angle information between the vehicle and the following target according to the first following path and the position information of the vehicle.
Step 111, generating a control signal according to the angle information and the current position information of the vehicle.
Specifically, when the distance between the vehicle and the following target is not greater than the preset distance threshold, the following target is within trackable range; at this time, the angle information between the vehicle and the following target can be calculated in real time according to the first following path and the position information of the following target. Taking the vehicle as the origin and the following target as the destination, the angle information may be the angle between the line connecting them and a horizontal line through the vehicle's center of gravity.
While the vehicle is driving, its current speed information can be acquired through differential GPS, and decisions can be made from target obstacle information to generate steering information.
Once the angle information between the vehicle and the following target is known, it is combined with the current steering and speed information to compute a control signal containing the motor's rotating speed and number of turns.
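A hedged sketch of steps 110 to 111 follows; the bearing geometry is standard, while the gear ratio and speed gain used to turn the angle into a motor command are purely illustrative assumptions:

```python
# Sketch: angle between vehicle heading and the vehicle-to-target line, and a
# control signal carrying the motor's rotating speed and number of turns.
import math

def target_angle(vehicle_xy, heading_rad, target_xy):
    """Signed angle from the vehicle's heading to the line toward the target."""
    bearing = math.atan2(target_xy[1] - vehicle_xy[1], target_xy[0] - vehicle_xy[0])
    diff = bearing - heading_rad
    return math.atan2(math.sin(diff), math.cos(diff))   # wrap to [-pi, pi]

def control_signal(angle_rad, deg_per_motor_turn=5.0, max_rpm=60.0):
    """Map the required pan angle to motor turns and a saturated rotating speed."""
    turns = math.degrees(angle_rad) / deg_per_motor_turn
    rpm = max(-max_rpm, min(max_rpm, max_rpm * angle_rad))
    return {"motor_rpm": rpm, "motor_turns": turns}
```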
Step 112, sending the control signal to a motor controller on the pan-tilt head that drives the acquisition device to rotate, so that the motor controller controls the rotating speed of the motor according to the control signal, the motor's rotation in turn driving the acquisition device on the pan-tilt head to rotate.
Specifically, the pan-tilt head carries the acquisition device and rotates under the drive of the motor, thereby rotating the acquisition device. According to the control signal, the motor controller controls the rotating speed and number of turns of the motor, so that the motor drives the pan-tilt head and the pan-tilt head drives the acquisition device, ensuring that the following target always remains within the capture range of the acquisition device.
Wherein the acquisition device may be a binocular camera, and the pan-tilt head may be a camera pan-tilt head carrying the binocular camera.
Specifically, when the motor has driven the pan-tilt head to the desired angle, the vehicle follows along the first following path.
It can be understood that while the vehicle advances along the following path, the distance between the vehicle and the following target is calculated in real time from the image information, and the pan-tilt head is rotated in real time to ensure the following target always remains within the capture range of the acquisition device.
Further, when the difference value between the first angle information and the second angle information is larger than the deflection range of the pan-tilt head, a predicted track between the first position information and the second position information is predicted according to the first position information corresponding to the first angle information and the second position information corresponding to the second angle information; the first angle information is an included angle between the vehicle and first position information in the plurality of position information of the following target, the second angle information is an included angle between the vehicle and the position information adjacent to the first position information, the first image information corresponds to the first position information, and the second image information corresponds to the position information adjacent to the first position information;
and splicing other position information except the first position information and the adjacent position information of the first position information with the predicted track to obtain a first following path of the following target.
Specifically, in actual rotation the pan-tilt head has a blind zone, that is, its rotation range is limited; for example, the rotation range may be 5° to 355°. If the required rotation angle falls outside the pan-tilt head's deflection range, the trajectory of the following target while it is outside the image-capture range can be obtained by trajectory prediction.
Here, by way of example and not limitation, the trajectory between the two positions may be predicted using a Gaussian mixture model.
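One plausible formulation of the Gaussian-mixture prediction is sketched below: fit a mixture to the target's recent step displacements and roll the dominant motion mode forward across the unobserved gap. This is an interpretation for illustration; the patent does not pin down the exact model:

```python
# Sketch: bridge the blind-zone gap between two known positions with a GMM
# fitted to the target's recent step displacements (scikit-learn assumed).
import numpy as np
from sklearn.mixture import GaussianMixture

def predict_gap(history, p_start, p_end, n_steps=10, n_components=2):
    """history: (N, 2) past positions, N >= n_components + 1."""
    steps = np.diff(np.asarray(history, dtype=float), axis=0)
    gmm = GaussianMixture(n_components=n_components).fit(steps)
    mean_step = gmm.means_[np.argmax(gmm.weights_)]   # dominant motion mode
    path, p = [], np.asarray(p_start, dtype=float)
    for i in range(1, n_steps + 1):
        drift = (np.asarray(p_end) - p) / (n_steps - i + 1)   # pull toward p_end
        p = p + 0.5 * mean_step + 0.5 * drift
        path.append(p.copy())
    return np.asarray(path)
```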
Step 113, after the rotation, following according to the first following path.
Specifically, when the motor has driven the pan-tilt head to the desired angle, the vehicle follows along the following path.
It can be understood that while the vehicle advances along the following path, the distance between the vehicle and the following target is calculated in real time from the image information, and the pan-tilt head is rotated in real time to ensure the following target always remains within the capture range of the acquisition device.
Step 114, when the following mode selection information is random following, determining a plurality of position information of the following target according to the multi-frame image information.
Step 115, predicting a second following path of the following target according to the multi-frame image information and the map data.
Specifically, the vehicle can analyze the acquired image information to obtain the following target's motions, such as arm-swing amplitude and whether it is walking or running, and subtle facial features, such as gaze direction and head orientation. It then predicts the target's next action from these motions and facial features, and finally predicts the target's trajectory over a certain duration from the next action and the map data; these trajectories form the second following path.
Step 116, calculating a time difference value between the vehicle and the following target according to the plurality of position information of the following target and the position information of the vehicle.
Specifically, according to the vehicle's current position information and speed information, combined with the predicted trajectory of the following target over a certain duration, the time difference between the vehicle and the predicted trajectory is calculated, that is, the predicted duration for the vehicle to reach each point of the predicted trajectory.
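By way of example and not limitation, the time-difference computation can be sketched as:

```python
# Sketch: for each point of the predicted track, compare the vehicle's predicted
# arrival time (straight-line distance at current speed) with the target's.
import math

def time_differences(vehicle_xy, vehicle_speed, predicted_track):
    """predicted_track: list of (t_target_s, x, y); returns per-point differences."""
    diffs = []
    for t_target, x, y in predicted_track:
        dist = math.hypot(x - vehicle_xy[0], y - vehicle_xy[1])
        t_vehicle = dist / max(vehicle_speed, 1e-6)   # guard against zero speed
        diffs.append(abs(t_vehicle - t_target))
    return diffs

# Following proceeds along the second following path only while the
# difference stays within the preset time threshold.
```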
Step 117, when the time difference value is not greater than the preset time threshold, following according to the second following path.
Specifically, when the vehicle and the following target are within a certain time-difference range, the vehicle can travel along the predicted trajectory, acquiring environment-perception data in real time while following it.
The method can also be applied to other mobile devices, such as a robot, which can likewise perform the cleaning work.
Further, after step 117, the method further includes:
when the distance is not smaller than the preset distance threshold, generating alarm information, the alarm information comprising the image information captured up to the current moment;
and sending the alarm information to a server and/or a third-party server, so that the server and/or the third-party server processes that image information.
Specifically, if the distance between the following target and the vehicle exceeds the distance threshold, the vehicle can generate alarm information and send it to the server and/or third-party server. The alarm information may include the image information acquired up to the moment the threshold was exceeded, and the server or third-party server can process and analyze the image and position information.
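A minimal sketch of this alarm path follows, assuming a simple JSON payload; the field names and the `post_to` transport helper are hypothetical, as the patent does not define a message format:

```python
# Sketch: package recently captured image information with the vehicle position
# and dispatch it to the server and/or third-party server.
import json
import time

def build_alarm(recent_frame_ids, vehicle_position):
    return {
        "type": "target_lost",
        "time": time.time(),
        "vehicle_position": vehicle_position,
        "frames": list(recent_frame_ids),   # references to stored image frames
    }

def post_to(endpoint, payload):
    print(f"POST {endpoint}: {payload[:80]}...")   # stand-in for real networking

def send_alarm(alarm, endpoints):
    payload = json.dumps(alarm)
    for endpoint in endpoints:              # server and/or third-party server
        post_to(endpoint, payload)
```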
By applying the target identification following method provided by the invention, the unmanned equipment determines a suspicious object from the image information collected by its acquisition device, takes a suspicious object in a moving state as the following target, generates different following modes from the map data once the target is locked, and follows the target in the corresponding mode. The data of the autonomous vehicle is thereby put to use, a security function is achieved, and security investment is saved.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A target identification following method, the method comprising:
acquiring multi-frame image information within a preset duration, collected by an acquisition device of a vehicle within a patrol range while cleaning work is carried out; each frame of image information comprises time information indicating when the image information was acquired;
processing the image information to determine a suspicious object; the suspicious object comprises a suspicious article or a perpetrator of a suspicious event;
determining the state of the suspicious object according to the time information;
when the state is a moving state, determining the suspicious object as a following target;
acquiring the position information of the vehicle and map data corresponding to the position information of the vehicle;
determining tracking difficulty according to the terrain in the map data, and determining that the following mode selection information is fixed following or random following according to the tracking difficulty and a preset difficulty table;
when the following mode selection information is fixed following, determining a plurality of position information of the following target according to the multi-frame image information and the map data;
generating a first following path according to the plurality of position information of the following target;
when the following target is followed, calculating the distance between the vehicle and the following target according to the current position information of the vehicle, the position information of the following target and the first following path;
when the distance is not larger than a preset distance threshold value, calculating angle information between the vehicle and the following target according to the first following path and the position information of the vehicle;
generating a control signal according to the angle information and the current position information of the vehicle;
sending the control signal to a motor controller on a pan-tilt head for driving the acquisition device to rotate, so that the motor controller controls the rotating speed of a motor according to the control signal and drives the acquisition device on the pan-tilt head to rotate through the rotation of the motor;
following according to the first following path;
when the following mode selection information is random following, determining a plurality of position information of the following target according to the multi-frame image information;
predicting a second following path of the following target according to the multi-frame image information and the map data;
calculating a time difference value between the vehicle and the following target according to the plurality of position information of the following target and the position information of the vehicle;
and when the time difference is not greater than a preset time threshold, following according to the second following path.
2. The method according to claim 1, wherein the processing the image information to determine a suspicious object specifically includes:
respectively matching the features in the image information with a suspicious-article feature library and a suspect image library;
when the matching with the suspicious-article feature library is successful, determining that the suspicious object is a suspicious article;
and when the matching with the suspect image library is successful, determining that the suspicious object is a perpetrator of a suspicious event.
3. The method of claim 1, wherein after the rotation of the acquisition device on the pan-tilt head and before following according to the first following path, the method further comprises:
when the difference value of the first angle information and the second angle information is larger than the deflection range of the pan-tilt head, predicting a predicted track between the first position information and the second position information according to first position information corresponding to the first angle information and second position information corresponding to the second angle information; the first angle information is an included angle between the vehicle and first position information in the plurality of position information of the following target, the second angle information is an included angle between the vehicle and the position information adjacent to the first position information, the first image information corresponds to the first position information, and the second image information corresponds to the position information adjacent to the first position information;
and splicing other position information except the first position information and the adjacent position information of the first position information with the predicted track to obtain a first following path of the following target.
4. The method according to claim 1, wherein predicting the second following path of the following target based on the plurality of frames of image information and the map data specifically includes:
processing the image information to determine the motions and/or subtle facial features of the following target;
predicting the next action of the following target according to the motions and/or subtle facial features;
and predicting the track of the following target within a preset time according to the next action and the map data.
5. The method of claim 1, wherein when following according to the first following path or a second following path, the method further comprises:
acquiring real-time image information of the following target;
matching the features in the real-time image information with the suspicious-article feature library and the suspect-image feature library to generate a matching result;
and analyzing the matching result, and when the matching with at least one of the suspicious-article feature library or the suspect-image feature library is successful, sending the real-time image information, the matching result and the current position information of the vehicle to a third-party server.
6. The method according to claim 1, wherein when the following mode selection information is a fixed following, determining a plurality of position information of the following target according to the plurality of frames of image information and the map data specifically includes:
processing the multi-frame image information to acquire environmental data in each frame of image information in the multi-frame image information;
and fitting the environment data and the map data, and determining the position information of the following target according to a fitting result.
7. The method of claim 1, further comprising:
and when the state is a static state, sending the image information and the suspicious object to a third-party server.
8. The method of claim 1, wherein after following according to the following path once the acquisition device has rotated, the method further comprises:
when the distance is not smaller than a preset distance threshold, generating alarm information, the alarm information comprising the image information captured up to the current moment;
and sending the alarm information to a server and/or a third-party server, so that the server and/or the third-party server processes that image information.
CN201811572824.0A (priority date 2018-12-21, filing date 2018-12-21): Target identification following method. Active. Granted as CN109740462B (en).

Priority Applications (1)

CN201811572824.0A (priority date 2018-12-21, filing date 2018-12-21): Target identification following method

Publications (2)

CN109740462A (en), published 2019-05-10
CN109740462B (en), published 2020-10-27

Family

ID=66361065

Family Applications (1)

CN201811572824.0A: Active, Target identification following method (priority date 2018-12-21, filing date 2018-12-21)

Country Status (1)

CN: CN109740462B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428603B (en) * 2019-07-26 2021-04-23 北京主线科技有限公司 Method and device for controlling following vehicle running in container truck formation
JP7274970B2 (en) * 2019-08-01 2023-05-17 本田技研工業株式会社 Tracking target identification system and tracking target identification method
CN110515095B (en) * 2019-09-29 2021-09-10 北京智行者科技有限公司 Data processing method and system based on multiple laser radars
CN111160420B (en) * 2019-12-13 2023-10-10 北京三快在线科技有限公司 Map-based fault diagnosis method, map-based fault diagnosis device, electronic equipment and storage medium
CN113841380A (en) * 2020-10-20 2021-12-24 深圳市大疆创新科技有限公司 Method, device, system, equipment and storage medium for determining target following strategy
CN112215209B (en) * 2020-11-13 2022-06-21 中国第一汽车股份有限公司 Car following target determining method and device, car and storage medium
CN114699046A (en) * 2022-04-25 2022-07-05 深圳市华屹医疗科技有限公司 Sleep monitoring method, monitor and monitoring system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101635834A (en) * 2008-07-21 2010-01-27 侯荣琴 Automatic tracing identification system for artificial neural control
CN105023429A (en) * 2014-04-24 2015-11-04 上海汽车集团股份有限公司 Vehicle-used vehicle tracking method and device
CN106326240A (en) * 2015-06-18 2017-01-11 中兴通讯股份有限公司 An object moving path identifying method and system
CN106529466A (en) * 2016-11-03 2017-03-22 中国兵器工业计算机应用技术研究所 Unmanned vehicle path planning method and unmanned vehicle path planning system based on bionic eye
CN107544506A (en) * 2017-09-27 2018-01-05 上海有个机器人有限公司 Robot follower method, robot and storage medium
CN108958264A (en) * 2018-08-03 2018-12-07 北京智行者科技有限公司 Road traffic checking method and vehicle based on automatic Pilot technology

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN103389733A (en) * 2013-08-02 2013-11-13 重庆市科学技术研究院 Vehicle line walking method and system based on machine vision
KR101793223B1 (en) * 2016-07-13 2017-11-03 모바일 어플라이언스 주식회사 Advanced driver assistance apparatus


Also Published As

Publication number Publication date
CN109740462A (en) 2019-05-10

Similar Documents

Publication Title
CN109740462B (en) Target identification following method
CN109686031B (en) Identification following method based on security
US11747822B2 (en) Mobile robot system and method for autonomous localization using straight lines extracted from visual images
US11885910B2 (en) Hybrid-view LIDAR-based object detection
CN109740461B (en) Object and subsequent processing method
Naphade et al. The 2018 nvidia ai city challenge
US20180349746A1 (en) Top-View Lidar-Based Object Detection
US20190310651A1 (en) Object Detection and Determination of Motion Information Using Curve-Fitting in Autonomous Vehicle Applications
CN109682388B (en) Method for determining following path
US11475671B2 (en) Multiple robots assisted surveillance system
US11403947B2 (en) Systems and methods for identifying available parking spaces using connected vehicles
CN107360394A (en) More preset point dynamic and intelligent monitoring methods applied to frontier defense video monitoring system
EP2264643A1 (en) Surveillance system and method by thermal camera
CN114530058A (en) Collision early warning method, device and system
CN111275957A (en) Traffic accident information acquisition method, system and camera
US20210397187A1 (en) Method and system for operating a mobile robot
CN115140034A (en) Collision risk detection method, device and equipment
CN109344776B (en) Data processing method
CN109740464B (en) Target identification following method
Roberts et al. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation
CN117494029B (en) Road casting event identification method and device
CN113825112B (en) Intelligent parking system and method based on Internet of things
KR100714646B1 (en) Camera location information acquisition system and method
CN117953645A (en) Intelligent security early warning method and system for park based on patrol robot
CN115359434A (en) Abnormal object monitoring method and equipment for vehicle-mounted monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee before: Beijing Idriverplus Technology Co.,Ltd.
