CN113628237A - Trajectory tracking method, system, electronic device and computer readable medium - Google Patents

Trajectory tracking method, system, electronic device and computer readable medium

Info

Publication number
CN113628237A
CN113628237A (application CN202010334554.0A)
Authority
CN
China
Prior art keywords
target object
video
information
acquiring
initial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010334554.0A
Other languages
Chinese (zh)
Inventor
杨鸣鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd filed Critical Beijing Lynxi Technology Co Ltd
Priority to CN202010334554.0A priority Critical patent/CN113628237A/en
Publication of CN113628237A publication Critical patent/CN113628237A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a trajectory tracking method, system, electronic device, and computer readable medium. The method comprises the following steps: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplemental information to perform trajectory tracking of the target object. The trajectory tracking method, the system, the electronic device and the computer readable medium can accurately position the target object and determine the motion trajectory of the target object in a complex indoor environment.

Description

Trajectory tracking method, system, electronic device and computer readable medium
Technical Field
The present disclosure relates to the field of computer information processing, and in particular, to a trajectory tracking method, system, electronic device, and computer readable medium.
Background
Indoor positioning technology supplements satellite positioning in indoor environments where satellite signals cannot be used, overcoming the problem that satellite signals are weak near the ground and cannot penetrate buildings, and ultimately determining the current position of an object. As the quality and efficiency of information services have improved, indoor positioning technology, with its low susceptibility to interference, has come to play an important role in daily life, work, and scientific research. It is highly practical, has large room for expansion and a wide range of applications, and enables rapid positioning of people and articles in complex environments such as libraries, gymnasiums, underground garages, warehouses, unmanned supermarkets, airports, and railway stations.
Indoor positioning systems typically integrate multiple technologies, such as wireless communication, base station positioning, and inertial navigation, to monitor the positions of people and objects in indoor spaces. Common indoor positioning techniques include Wi-Fi, Bluetooth, infrared, RFID, and Zigbee. However, these techniques offer low accuracy when locating and tracking pedestrians in complex indoor environments.
Therefore, there is a need for a new trajectory tracking method, system, electronic device, and computer readable medium.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present disclosure provides a trajectory tracking method, system, electronic device and computer readable medium, which can accurately locate a target object and determine a motion trajectory of the target object in a complex indoor environment.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, a trajectory tracking method is provided, which includes: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplemental information to perform trajectory tracking of the target object.
In an exemplary embodiment of the present disclosure, acquiring an initial image of a target object includes:
acquiring an initial image of the target object acquired by an initial camera device, wherein the initial image comprises: a user face image and/or appearance image and/or gait image.
In an exemplary embodiment of the present disclosure, acquiring a first video of the target object, and tracking a moving route of the target object based on the first video to generate trajectory information includes: when the target object moves, acquiring a plurality of first videos generated by tracking and collecting the target object through a plurality of first camera devices; and generating the trajectory information based on the plurality of first videos.
In an exemplary embodiment of the present disclosure, acquiring a plurality of first videos generated by tracking and capturing the target object by a plurality of first cameras while the target object is moving includes: controlling one of the plurality of first camera devices to acquire a first video of the target object; determining the moving direction of the target object according to the first video; and controlling another first camera device to acquire another first video of the target object according to the moving direction.
In an exemplary embodiment of the present disclosure, generating the trajectory information based on the plurality of first videos includes: determining a plurality of movement routes of the target object in the plurality of first videos; the trajectory information is generated based on a positional relationship between the plurality of movement routes and the plurality of first image pickup devices.
In an exemplary embodiment of the present disclosure, acquiring a second video of the target object based on the initial image includes: acquiring a plurality of real-time videos acquired by a plurality of second camera devices; and performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
In an exemplary embodiment of the present disclosure, performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object includes: performing face recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or performing video structural calculation on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or performing gait recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
In an exemplary embodiment of the present disclosure, generating the supplementary information based on the second video includes: determining a real-time location of the target object based on the second video; and generating the supplementary information based on a positional relationship between the real-time position and the plurality of second image pickup devices.
In an exemplary embodiment of the present disclosure, the method further includes: storing a plurality of pieces of historical track information of the target object; and analyzing the behavior of the target object based on the plurality of pieces of historical track information.
In an exemplary embodiment of the present disclosure, the method further includes: generating early warning information when the track information of the target object meets a preset condition.
In an exemplary embodiment of the present disclosure, the method further includes: acquiring characteristic information of the target object through an information acquisition device, wherein the characteristic information includes a two-dimensional code and/or a radio frequency identification code; and storing a plurality of pieces of historical track information of the target object according to the characteristic information.
According to an aspect of the present disclosure, a trajectory tracking system is provided, the system comprising: the initial camera device is used for acquiring an initial image of the target object; a plurality of first image pickup devices for acquiring a first video of the target object; a plurality of second camera devices for acquiring a second video; and the processing device is used for tracking the moving route of the target object based on the first video to generate track information, determining a second video of the target object based on the initial image, generating supplementary information according to the second video, and adjusting the track information based on the supplementary information to track the target object.
In an exemplary embodiment of the present disclosure, the plurality of first image pickup devices are disposed at predetermined positions according to a first rule, and the first image pickup devices include fisheye cameras.
In one exemplary embodiment of the present disclosure, the plurality of second image pickup devices are disposed at predetermined positions according to a second rule.
According to an aspect of the present disclosure, an electronic device is provided, the electronic device including: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described above.
According to an aspect of the disclosure, a computer-readable medium is provided, on which a computer program is stored, which, when executed by a processor, carries out the method described above.
According to the trajectory tracking method, system, electronic device and computer readable medium of the present disclosure, an initial image of a target object is obtained; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplementary information to track the trajectory of the target object, so that the target object can be accurately positioned and the motion trajectory of the target object can be determined in a complex indoor environment.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are merely some embodiments of the present disclosure, and other drawings may be derived from those drawings by those of ordinary skill in the art without inventive effort.
FIG. 1 is a block diagram illustrating an application scenario of a trajectory tracking system in accordance with an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating an application scenario of a trajectory tracking system according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating a trajectory tracking method according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment.
FIG. 5 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment.
FIG. 6 is a block diagram illustrating a trajectory tracking system in accordance with an exemplary embodiment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating a computer-readable medium in accordance with an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, systems, steps, and so forth. In other instances, well-known methods, systems, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
As described above, common indoor positioning techniques include Wi-Fi, Bluetooth, infrared, RFID, and Zigbee. The inventor of the present disclosure has found that these technologies have the following disadvantages in practical applications:
Wi-Fi positioning: a Wi-Fi access point typically covers only an area with a radius of about 90 meters and is easily interfered with by other signals, which degrades positioning accuracy; in addition, the energy consumption of the locator is high.
Bluetooth positioning: in a complex spatial environment, the stability of a Bluetooth positioning system is relatively poor, and it is strongly affected by noise signals.
Infrared positioning: optical sensors installed indoors receive modulated infrared signals emitted by each mobile device (an infrared IR tag) to determine its position. However, because light cannot pass through obstacles, infrared propagates only along the line of sight, is easily disturbed by other light sources, and has a short transmission range, so its indoor positioning performance is poor. When the mobile device is placed in a pocket or blocked by a wall, it cannot work normally, and a receiving antenna must be installed in every room or corridor, so the overall cost is high.
RFID positioning: RFID uses contactless, bidirectional radio-frequency communication to exchange data in order to identify and locate mobile devices. Its disadvantages include a short operating range.
To overcome these shortcomings of existing indoor positioning technologies, the present disclosure provides a novel trajectory tracking method and system in which three types of cameras cooperate to locate and track people and determine their motion trajectories. The following detailed description is given in conjunction with specific examples.
FIG. 1 is a block diagram illustrating an application scenario of a trajectory tracking system in accordance with an exemplary embodiment.
As shown in fig. 1, the system architecture 10 may include image capturing apparatuses 101, 102, 103, a network 104, and a processing device 105. The network 104 is a medium that provides communication links between the image capturing apparatuses 101, 102, 103 and the processing device 105. Network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
The image pickup apparatuses 101, 102, 103 can be used to interact with the processing device 105 through the network 104 to receive or transmit data or the like. Various communication client applications may be installed on the image pickup apparatuses 101, 102, 103.
The camera devices 101, 102, 103 may be various electronic devices having a camera function, including, but not limited to, professional cameras, CCD cameras, web cameras, broadcast-grade cameras, business-grade cameras, home-grade cameras, studio/field-based cameras, camcorders, monochrome cameras, color cameras, infrared cameras, X-ray cameras, surveillance cameras, scout cameras, button cameras, and the like.
The image capturing apparatus 101 may be an initial image capturing device for acquiring an initial image of a target object.
The camera apparatus 102 may include a plurality of first camera devices for acquiring a first video of the target object.
The image capturing apparatus 103 may include a plurality of second image capturing devices for acquiring a second video of the target object based on the initial image.
The processing device 105 may be a processing device that provides various services, such as a processing device that performs background processing on pictures or videos taken by the image capturing apparatuses 101, 102, 103. The processing device 105 may analyze and otherwise process the received data such as the picture or video, and feed back the processing result (e.g., the trajectory of the target object) to the administrator.
The processing device 105 may, for example, acquire an initial image of the target object; the processing device 105 may, for example, acquire a first video of the target object, and generate trajectory information by tracking the moving route of the target object based on the first video; the processing device 105 may, for example, obtain a second video of the target object based on the initial image and generate supplemental information based on the second video; the processing device 105 may adjust the trajectory information, for example, based on the supplemental information, to perform trajectory tracking of the target object.
The processing device 105 may be a single physical processing device or may be composed of a plurality of processing devices. It should be noted that the trajectory tracking method provided by the embodiments of the present disclosure may be executed jointly by the image capturing apparatuses 101, 102, 103 and the processing device 105; accordingly, a trajectory tracking system may be deployed across the image capturing apparatuses 101, 102, 103 and the processing device 105. The processing device 105 may control the image capturing apparatuses 101, 102, and 103 to capture data and to transmit and receive data, or the image capturing apparatuses 101, 102, and 103 may actively capture data and send the captured data to the processing device 105 for processing.
The processing device 105 may be a server or a terminal device, and the terminal device may include at least one camera device, which is not limited by the present disclosure. In one embodiment, the method in the present disclosure may be supported by designating any one of the image capturing apparatuses 101, 102, 103 (for example, the image capturing apparatus 103) as a processing device, in which case, the image capturing apparatus 103 may receive video data sent by the image capturing apparatuses 101, 102 in addition to capturing a current video image, and process the video data according to the method described in the embodiment of the present disclosure to track a user, which is not limited by the present disclosure.
FIG. 2 is a schematic diagram illustrating an application scenario of a trajectory tracking system according to an exemplary embodiment. As shown in fig. 2, the initial image capturing device may include a camera L and an information collecting device; the first camera devices can be the image acquisition devices 1-19; and the second camera devices can be the image acquisition devices A-K.
The camera L is used to identify people at the entrance. For example, information can be collected at the entrance gate through the camera L, a two-dimensional code, an RFID card, and the like: various types of characteristic information of users entering the indoor area are collected, and correspondences between these types of characteristic information are established. For example, in an unmanned supermarket scenario, a user enters the supermarket by scanning a two-dimensional code; the two-dimensional code device acquires the user's two-dimensional code information, the camera acquires the user's image, and a correspondence between the user information and the two-dimensional code is established.
The image acquisition devices 1-19 are arranged on the indoor ceiling. For example, a plurality of fisheye cameras can shoot downward from the ceiling and capture the head-top features of a moving object. For example, while camera 1 is observing the top of a user's head, camera 1 can track the user's motion and determine the moving direction of the moving target. When camera 1 determines that the user is about to enter the area covered by camera 2, camera 2, which is predicted to cover that area, is notified; tracking is switched to camera 2, and the information on the moving target is synchronized with the trajectory information that camera 2 forms by shooting the target. Continuing in this way, cameras 1 to 19 together track the user's motion trajectory.
The image acquisition devices A-K are cameras arranged at fixed positions and are used to correct the information of the moving target. When the ceiling-mounted cameras 1 to 19 track a target object, problems such as a broken tracking chain or tracking errors may occur because of occlusion. The A-K cameras acquire and analyze image information of the users, and the user corresponding to a given head-top track is corrected or re-determined by algorithms such as face recognition, video structuring, and gait recognition. For example, camera 1 may observe the tops of two users' heads at the same time and confuse them; the A-K cameras can acquire images of the users and, by combining the various types of user information (such as face, clothing, and gait) previously collected by the camera L, determine which user each head-top belongs to and continue tracking.
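As an illustration of how such a correction step could be implemented, the following Python sketch matches a side-view observation from an A-K camera against the user profiles registered at the entrance. The feature vectors, the cosine-similarity score, and the 0.7 threshold are assumptions made for the sketch, not values or algorithms specified in this disclosure.

```python
# Hypothetical sketch of the A-K correction step: re-assign an ambiguous
# head-top track to a registered user by comparing appearance features.
from dataclasses import dataclass
from math import sqrt

@dataclass
class UserProfile:
    user_id: str
    face_vec: list[float]      # face features collected at the entrance by camera L
    clothing_vec: list[float]  # appearance features from video structuring
    gait_vec: list[float]      # gait features

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def reidentify(observation, registered):
    """Return the user_id whose stored features best match an observation made
    by one of the fixed A-K cameras, or None if no match is confident.
    `observation` is a dict holding "face", "clothing" and "gait" vectors."""
    best_id, best_score = None, 0.0
    for user in registered:
        score = (cosine(observation["face"], user.face_vec)
                 + cosine(observation["clothing"], user.clothing_vec)
                 + cosine(observation["gait"], user.gait_vec)) / 3.0
        if score > best_score:
            best_id, best_score = user.user_id, score
    return best_id if best_score > 0.7 else None  # 0.7 is an assumed threshold
```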
According to the trajectory tracking method, an initial image of a target object is obtained; a first video of the target object is acquired, and the moving route of the target object is tracked based on the first video to generate track information; a second video of the target object is acquired based on the initial image, and supplementary information is generated based on the second video; and the track information is adjusted based on the supplementary information to track the target object. In this way, the target object can be accurately located and its motion trajectory determined in a complex indoor environment.
FIG. 3 is a flow chart illustrating a trajectory tracking method according to an exemplary embodiment. The trajectory tracking method 30 includes steps S302 to S308. The trajectory tracking method is applicable to a server or a terminal device, wherein the terminal device may include at least one camera. In one embodiment, the method in the present disclosure may also be supported by designating any one of the image capturing apparatuses 101, 102, 103 (for example, the image capturing apparatus 103) as a terminal, in which case, the image capturing apparatus 103 may receive video data sent by the image capturing apparatuses 101, 102 in addition to capturing a current video image, and process the video data according to the method described in the embodiment of the present disclosure to track a user, which is not limited by the present disclosure.
As shown in fig. 3, in S302, an initial image of the target object is acquired. This includes acquiring an initial image of the target object through an initial camera device, and may further include: acquiring characteristic information of the target object through an information acquisition device, wherein the characteristic information includes a two-dimensional code and/or a radio frequency identification code; and associating and storing a plurality of pieces of historical track information of the target object according to the characteristic information.
Further, the user information may include a face, clothing, gait, and the like. For example, in an unmanned supermarket scenario, a user enters the supermarket by scanning a two-dimensional code, which can serve as the user's characteristic information; the two-dimensional code device acquires the user's two-dimensional code information, the camera collects an image of the user, which can serve as the initial image, and a correspondence between the user image and the two-dimensional code is established in the background processing device.
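The following Python sketch illustrates one possible form of this registration step: it simply stores the correspondence between a scanned two-dimensional code and the initial image collected at the gate, so that later trajectories can be attached to the same user. The class and field names are hypothetical and not taken from this disclosure.

```python
# Illustrative sketch (not the patent's implementation): associate the
# two-dimensional code scanned at the entrance gate with the initial image
# captured by camera L, so later observations can be linked to the same user.
import time

class EntranceRegistry:
    def __init__(self):
        self._by_code = {}   # qr_code -> user record

    def register(self, qr_code: str, initial_image_path: str) -> dict:
        """Store the correspondence between a user's QR code and the initial
        image collected when the user passes the gate."""
        record = {
            "qr_code": qr_code,
            "initial_image": initial_image_path,
            "entered_at": time.time(),
            "history_tracks": [],   # historical trajectory information
        }
        self._by_code[qr_code] = record
        return record

    def append_track(self, qr_code: str, track: list) -> None:
        """Attach a finished trajectory to the user identified by the code."""
        self._by_code[qr_code]["history_tracks"].append(track)

# Usage: when a visitor scans the gate, the gate controller might call
# registry.register("QR-12345", "entrance/cam_L/0001.jpg").
```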
In S304, a first video of the target object is acquired, and a moving route of the target object is tracked based on the first video to generate track information. Wherein the first video comprises at least one frame of image.
In one possible implementation, step S304 may include: when the target object moves, acquiring a plurality of first videos generated by tracking and collecting the target object through a plurality of first camera devices; and generating the trajectory information based on the plurality of first videos.
The details of "acquiring the first video of the target object and tracking the moving route of the target object based on the first video to generate the track information" will be described in the embodiment corresponding to fig. 4. Each of the first cameras may track and capture the target object to generate a plurality of first videos, or some first cameras may each generate one first video while others generate several; this disclosure is not limited in this regard.
In S306, a second video of the target object is acquired based on the initial image, and supplemental information is generated based on the second video. Wherein the second video comprises at least one frame of image.
In one possible implementation, step S306 may include: acquiring a plurality of real-time videos acquired by a plurality of second camera devices; and performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
The details of "acquiring a second video of the target object based on the initial image, and generating supplemental information based on the second video" will be described in the embodiment corresponding to fig. 5.
In S308, the trajectory information is adjusted based on the supplemental information to perform trajectory tracking of the target object.
In one embodiment, the method further includes: storing a plurality of pieces of historical track information of the target object; and analyzing the behavior of the target object based on the plurality of pieces of historical track information.
In one embodiment, the method further includes: generating early warning information when the track information of the target object meets a preset condition.
In one embodiment, in an unmanned supermarket, the face image of a user collected by a camera can be stored in the processing device. When the user reappears, a data record for the user can be established through face image comparison, and from this record the user's purchasing habits can be continuously analyzed, such as the user's preferences, living habits, and daily schedule. Further, a warning can be issued when preset conditions are met; for example, if a user appears in the supermarket unusually often, the user may be flagged as potentially malicious, and a warning is generated so that the manager pays attention to that user.
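A minimal sketch of such frequency-based early warning is shown below, assuming a visit counter keyed by the recognized user; the five-visits-per-day threshold is an invented example value, not a condition taken from this disclosure.

```python
# Illustrative sketch only: count how often a recognized user reappears and
# raise a warning above an assumed frequency threshold.
from collections import defaultdict

class VisitMonitor:
    def __init__(self, max_visits: int = 5, window_seconds: float = 86400.0):
        self.max_visits = max_visits          # assumed limit: 5 visits ...
        self.window = window_seconds          # ... within 24 hours
        self._visits = defaultdict(list)      # user_id -> visit timestamps

    def record_visit(self, user_id: str, timestamp: float) -> bool:
        """Record a reappearance of the user; return True if early-warning
        information should be generated because the frequency is exceeded."""
        visits = self._visits[user_id]
        visits.append(timestamp)
        recent = [t for t in visits if timestamp - t <= self.window]
        self._visits[user_id] = recent
        return len(recent) > self.max_visits
```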
According to the trajectory tracking method of the present disclosure, people can be located and tracked and their motion trajectories determined through the three types of cameras. Data analysis can be performed on the acquired motion trajectories, and the analysis results can be used for early warning and other purposes.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
FIG. 4 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment. The flow 40 shown in fig. 4 is a detailed description of S304 "acquiring the first video of the target object and tracking the moving route of the target object based on the first video to generate track information" in the flow shown in fig. 3.
As shown in fig. 4, in S402, one of the plurality of first camera devices is controlled to acquire a first video of the target object. The plurality of first camera devices can be arranged on the indoor ceiling; they may be fisheye cameras shooting downward from the ceiling, or cameras of different types working in cooperation with one another, and this disclosure is not limited in this regard.
In S404, the moving direction of the target object is determined according to the first video. While the current first camera device is capturing the user, the direction of the user's trajectory can be determined from the positions at which the user appears within that camera's field of view.
In S406, another first camera device is controlled to acquire another first video of the target object according to the moving direction. The image area the user is about to enter is determined from the moving direction, the first camera device covering that area is identified, and when the user enters that area, tracking is switched to this camera device to continue shooting the user's trajectory.
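The following Python sketch illustrates one possible handoff rule for steps S404 and S406, assuming the ceiling cameras are laid out so that each camera knows its neighbors and that the moving direction is reduced to one of four headings; neither assumption is prescribed by this disclosure.

```python
# Minimal handoff sketch under stated assumptions: adjacency between ceiling
# cameras is known, and the moving direction is one of four compass headings.
ADJACENCY = {
    # camera_id -> {heading: neighbouring camera that covers that side}
    1: {"north": 2, "east": 5},
    2: {"south": 1, "east": 6},
    # ... cameras 3-19 would be listed in the same way
}

def heading(prev_xy, curr_xy):
    """Collapse the displacement of the head-top detection into a heading."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    if abs(dx) >= abs(dy):
        return "east" if dx > 0 else "west"
    return "north" if dy > 0 else "south"

def next_camera(current_cam, prev_xy, curr_xy):
    """Return the camera that should take over tracking, or None to stay put."""
    return ADJACENCY.get(current_cam, {}).get(heading(prev_xy, curr_xy))

# Example: next_camera(1, (0.5, 0.5), (0.5, 0.8)) returns 2, i.e. switch to
# camera 2 when the target drifts toward the side of the frame adjacent to it.
```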
In S408, a plurality of movement routes of the target object in the plurality of first videos are determined.
In S410, the trajectory information is generated based on the positional relationship between the plurality of movement routes and the plurality of first camera devices. As the videos are captured, the user's trajectory information can be generated from the positions of the different first camera devices and the user's movement routes within the different areas.
In one embodiment, for example, the area covered by each first camera device that the user passes through is marked to outline the approximate range of the user's trajectory; the user's route within each area is then drawn, the routes of adjacent areas are connected, and the user's trajectory information is thereby determined.
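A minimal stitching sketch under stated assumptions is given below: each per-camera route in normalized image coordinates is mapped to floor coordinates using the camera's known mounting position and an assumed square coverage area, and the per-area routes are then concatenated in time order. The linear image-to-floor mapping is an illustrative simplification, not a calibration method from this disclosure.

```python
# Illustrative stitching sketch: convert each per-camera route to floor
# coordinates and join routes from adjacent areas into one trajectory.
def to_floor(point_xy, cam):
    """Map a normalized image point (0..1) into floor coordinates (meters)."""
    cx, cy = cam["center"]          # floor position directly under the camera
    half = cam["coverage_m"] / 2.0  # half-width of the area the camera sees
    return (cx + (point_xy[0] - 0.5) * 2 * half,
            cy + (point_xy[1] - 0.5) * 2 * half)

def stitch(routes, cameras):
    """routes: list of (camera_id, [(t, x, y), ...]) in chronological order."""
    trajectory = []
    for cam_id, pts in routes:
        cam = cameras[cam_id]
        trajectory.extend((t,) + to_floor((x, y), cam) for t, x, y in pts)
    trajectory.sort(key=lambda p: p[0])   # order samples by timestamp
    return trajectory

cameras = {1: {"center": (2.0, 2.0), "coverage_m": 4.0},
           2: {"center": (2.0, 6.0), "coverage_m": 4.0}}
routes = [(1, [(0.0, 0.5, 0.5), (1.0, 0.5, 0.9)]),
          (2, [(2.0, 0.5, 0.1), (3.0, 0.5, 0.5)])]
print(stitch(routes, cameras))  # one continuous floor-coordinate track
```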
FIG. 5 is a flow chart illustrating a trajectory tracking method according to another exemplary embodiment. The flow 50 shown in fig. 5 is a detailed description of S306 "acquiring a second video of the target object based on the initial image and generating supplementary information based on the second video" in the flow shown in fig. 3.
As shown in fig. 5, in S502, a plurality of real-time videos captured by a plurality of second camera devices are acquired. The second camera devices can be arranged at fixed positions and are used to correct the information of the moving target, addressing problems such as a broken tracking chain or tracking errors caused by occlusion. Each real-time video comprises at least one frame of image.
In a possible implementation manner, the second video corresponding to the target object may be determined in at least one of steps S504, S506 and S508.
In S504, face recognition is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Face recognition is a biometric technology that identifies a person based on facial feature information. It comprises a series of related techniques, often also called portrait or facial recognition, in which a camera or video camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and the detected faces are then recognized.
In S506, video structuring calculation is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Video structuring refers to building a structured video big-data platform around the attributes presented in the video frames, such as people, vehicles, objects, colors, and numbers. After a video is structured, it is stored in a corresponding structured data warehouse, which greatly reduces storage requirements. People in the video images can be structurally processed to obtain various structured attribute information about the user, including clothing and accessory features (coats, trousers, skirts and dresses, shoes, hats, glasses and sunglasses, scarves, belts), carried-object features (single-shoulder bags, backpacks, handbags, trolley cases, umbrellas), and body features (hair, face).
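For illustration, a structured attribute record for a person could look like the following Python sketch; the field names paraphrase the categories listed above, and the dataclass-based storage format is an assumption rather than the disclosure's warehouse schema.

```python
# Illustrative record of the structured attributes obtained for one person.
from dataclasses import dataclass, field, asdict

@dataclass
class StructuredPerson:
    # clothing and accessory features
    coat: str = ""
    trousers_or_skirt: str = ""
    shoes: str = ""
    hat: str = ""
    glasses: str = ""
    scarf_or_belt: str = ""
    # carried-object features
    carried_objects: list[str] = field(default_factory=list)  # bag, umbrella...
    # body features
    hair: str = ""
    face_id: str = ""   # reference to a stored face template

record = StructuredPerson(coat="red", trousers_or_skirt="jeans",
                          carried_objects=["backpack"], hair="short")
print(asdict(record))   # what would be written to the structured warehouse
```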
In S508, gait recognition is performed on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object. Gait recognition is a biometric technology that identifies a person by his or her walking posture. Compared with other biometric technologies, it works contactlessly at long range and is difficult to disguise, which gives it advantages in intelligent video surveillance over appearance-based image recognition.
In S510, a real-time location of the target object is determined based on the second video.
In S512, the supplementary information is generated based on the positional relationship between the real-time position and the plurality of second camera devices.
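A minimal sketch of steps S510-S512 and the subsequent adjustment in S308 follows, assuming each second camera has a known floor position and that the target's in-frame offset can be converted to meters; these calibration details are assumptions, not part of this disclosure.

```python
# Minimal sketch: build a correction sample from a fixed camera's observation
# and use it to adjust the nearest point of the overhead-camera trajectory.
def supplementary_info(cam_id, cameras, frame_offset_m, timestamp):
    """Estimate the target's floor position from the fixed camera's known
    location plus the target's offset within the frame (in meters)."""
    cam_x, cam_y = cameras[cam_id]["floor_position"]
    dx, dy = frame_offset_m
    return {
        "time": timestamp,
        "position": (cam_x + dx, cam_y + dy),  # real-time floor position
        "source_camera": cam_id,               # which second camera saw it
    }

def adjust_trajectory(trajectory, correction, max_gap_s=1.0):
    """Snap the trajectory point nearest in time to the corrected position."""
    nearest = min(trajectory, key=lambda p: abs(p["time"] - correction["time"]))
    if abs(nearest["time"] - correction["time"]) <= max_gap_s:
        nearest["position"] = correction["position"]
    return trajectory
```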
Those skilled in the art will appreciate that all or part of the steps of the above embodiments may be implemented as computer programs executed by a CPU; when executed by the CPU, the programs perform the functions defined by the methods provided by the present disclosure. The programs may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
FIG. 6 is a block diagram illustrating a trajectory tracking system in accordance with an exemplary embodiment. As shown in fig. 6, the trajectory tracking system 60 includes: an initial image capturing device 602, a first image capturing device 604, a second image capturing device 606, and a processing device 608.
The initial camera 602 is used for acquiring an initial image of a target object;
the plurality of first cameras 604 are used for acquiring a first video of the target object; the plurality of first cameras 604 are arranged at predetermined positions according to a first rule, and the first cameras 604 comprise fisheye cameras;
the plurality of second cameras 606 are used for acquiring a second video of the target object based on the initial image; the plurality of second cameras 606 are arranged at predetermined positions according to a second rule;
the processing device 608 is configured to track the moving route of the target object based on the first video to generate track information, determine the second video of the target object based on the initial image, generate supplementary information based on the second video, and adjust the track information based on the supplementary information to track the target object.
The first rule may be determined according to the area to be tracked and the acquisition range of the first camera devices. For example, if the area to be tracked is an indoor supermarket, a plurality of first camera devices can be distributed, uniformly or non-uniformly, on the ceiling of the supermarket according to its floor area and the acquisition range of each device, so that the first camera devices can cover any part of the supermarket. The second rule may be determined according to the area to be tracked and the acquisition range of the second camera devices. For example, the second camera devices may be arranged in several fixed areas of the supermarket, preferably areas where users appear frequently, so that second videos are collected there and the supplementary information generated from them corrects the track information of the target object.
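As an illustration of a possible first rule, the following Python sketch tiles a rectangular floor plan with overhead cameras whose (assumed square) coverage areas overlap by 10%; the overlap value and the square-coverage model are assumptions, not placement rules given in this disclosure.

```python
# Illustrative placement sketch for the ceiling-mounted first camera devices.
import math

def plan_ceiling_cameras(floor_w_m, floor_h_m, coverage_m, overlap=0.1):
    """Return grid positions (x, y) in meters for the first camera devices so
    that adjacent coverage areas overlap by the given fraction."""
    step = coverage_m * (1.0 - overlap)           # spacing between cameras
    cols = math.ceil(floor_w_m / step)
    rows = math.ceil(floor_h_m / step)
    return [(min((c + 0.5) * step, floor_w_m), min((r + 0.5) * step, floor_h_m))
            for r in range(rows) for c in range(cols)]

# Example: a 20 m x 12 m supermarket with cameras each covering a 4 m square.
print(len(plan_ceiling_cameras(20.0, 12.0, 4.0)))  # number of cameras needed
```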
According to the track tracking system disclosed by the invention, an initial image of a target object is obtained; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplementary information to track the trajectory of the target object, so that the target object can be accurately positioned and the motion trajectory of the target object can be determined in a complex indoor environment.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 700 according to this embodiment of the disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 is embodied in the form of a general purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one memory unit 720, a bus 730 that connects the various system components (including the memory unit 720 and the processing unit 710), a display unit 740, and the like.
The storage unit stores program code executable by the processing unit 710, causing the processing unit 710 to perform the steps of the various exemplary embodiments of the present disclosure described above in this specification. For example, the processing unit 710 may perform the steps shown in fig. 3, fig. 4 and fig. 5.
The memory unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)7201 and/or a cache memory unit 7202, and may further include a read only memory unit (ROM) 7203.
The memory unit 720 may also include a program/utility 7204 having a set (at least one) of program modules 7205, such program modules 7205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 700' (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 760. The network adapter 760 may communicate with other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, as shown in fig. 8, the technical solution according to the embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a processing apparatus, or a network device, etc.) to execute the above method according to the embodiment of the present disclosure.
The software product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or processing apparatus. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the computer readable medium to perform the functions of: acquiring an initial image of a target object; acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information; acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and adjusting the trajectory information based on the supplemental information to perform trajectory tracking of the target object.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be modified accordingly in one or more apparatuses unique from the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a processing device, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements, instrumentalities, or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (15)

1. A trajectory tracking method, comprising:
acquiring an initial image of a target object;
acquiring a first video of the target object, and tracking the moving route of the target object based on the first video to generate track information;
acquiring a second video of the target object based on the initial image, and generating supplementary information based on the second video; and
adjusting the trajectory information based on the supplemental information to perform trajectory tracking of the target object.
2. The method of claim 1, wherein acquiring an initial image of a target object comprises:
acquiring an initial image of the target object acquired by an initial camera device, wherein the initial image comprises: a user face image and/or appearance image and/or gait image.
3. The method of claim 1, wherein obtaining a first video of the target object and tracking the moving route of the target object based on the first video to generate trajectory information comprises:
when the target object moves, acquiring a plurality of first videos generated by tracking and collecting the target object through a plurality of first camera devices; and
generating the trajectory information based on the plurality of first videos.
4. The method of claim 3, wherein acquiring a plurality of first videos generated by a tracking acquisition of the target object by a plurality of first cameras comprises:
controlling one of the plurality of first camera devices to acquire at least one first video of the target object;
determining a moving direction of the target object according to the at least one first video; and
controlling another first camera device to acquire at least one first video of the target object according to the moving direction.
5. The method of claim 3 or 4, wherein generating the trajectory information based on the plurality of first videos comprises:
determining a plurality of movement routes of the target object in the plurality of first videos;
generating the trajectory information based on a positional relationship between the plurality of movement routes and the plurality of first image pickup devices.
6. The method of claim 1, wherein acquiring a second video of the target object based on the initial image comprises:
acquiring a plurality of real-time videos acquired by a plurality of second camera devices; and
performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
7. The method of claim 6, wherein performing target recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object comprises:
performing face recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or
Performing video structural calculation on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object; and/or
Performing gait recognition on the plurality of real-time videos based on the initial image to determine the second video corresponding to the target object.
8. The method of claim 1, wherein generating supplemental information based on the second video comprises:
determining a real-time location of the target object based on the second video; and
generating the supplemental information based on a positional relationship between the real-time position and the plurality of second image pickup devices.
9. The method of claim 1, further comprising:
acquiring characteristic information of the target object through an information acquisition device, wherein the characteristic information comprises: two-dimensional codes and/or radio frequency identification codes;
storing a plurality of historical track information of the target object according to the characteristic information; and
analyzing and processing the behavior of the target object based on the plurality of historical track information.
10. The method of claim 1, further comprising:
generating early warning information when the track information of the target object meets a preset condition.
11. A trajectory tracking system, comprising:
the initial camera device is used for acquiring an initial image of the target object;
a plurality of first image pickup devices for acquiring a first video of the target object;
a plurality of second camera devices for acquiring a second video; and
the processing device is used for tracking the moving route of the target object based on the first video to generate track information, determining a second video of the target object based on the initial image, generating supplementary information according to the second video, and adjusting the track information based on the supplementary information to track the target object.
12. The system of claim 11, wherein the plurality of first camera devices are arranged in predetermined locations according to a first rule, the first camera devices comprising fisheye cameras.
13. The system of claim 11, wherein the plurality of second cameras are arranged in predetermined positions according to a second rule.
14. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.
15. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-10.
CN202010334554.0A 2020-04-24 2020-04-24 Trajectory tracking method, system, electronic device and computer readable medium Pending CN113628237A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334554.0A CN113628237A (en) 2020-04-24 2020-04-24 Trajectory tracking method, system, electronic device and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010334554.0A CN113628237A (en) 2020-04-24 2020-04-24 Trajectory tracking method, system, electronic device and computer readable medium

Publications (1)

Publication Number Publication Date
CN113628237A 2021-11-09

Family

ID=78376269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334554.0A Pending CN113628237A (en) 2020-04-24 2020-04-24 Trajectory tracking method, system, electronic device and computer readable medium

Country Status (1)

Country Link
CN (1) CN113628237A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114200934A (en) * 2021-12-06 2022-03-18 北京云迹科技股份有限公司 Robot target following control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10812761B2 (en) Complex hardware-based system for video surveillance tracking
KR102215041B1 (en) Method and system for tracking an object in a defined area
US9615064B2 (en) Tracking moving objects using a camera network
Luber et al. People tracking in rgb-d data with on-line boosted target models
Liu et al. Measuring indoor occupancy in intelligent buildings using the fusion of vision sensors
CN112041848A (en) People counting and tracking system and method
US20090066513A1 (en) Object detecting device, object detecting method and object detecting computer program
CN111311649A (en) Indoor internet-of-things video tracking method and system
Chen et al. Smart campus care and guiding with dedicated video footprinting through Internet of Things technologies
Zhang et al. Indoor space recognition using deep convolutional neural network: a case study at MIT campus
US20220377285A1 (en) Enhanced video system
CN112381853A (en) Apparatus and method for person detection, tracking and identification using wireless signals and images
CN112399137B (en) Method and device for determining movement track
Belka et al. Integrated visitor support system for tourism industry based on IoT technologies
CN113628237A (en) Trajectory tracking method, system, electronic device and computer readable medium
Bazo et al. Baptizo: A sensor fusion based model for tracking the identity of human poses
Nam et al. Inference topology of distributed camera networks with multiple cameras
Nambiar et al. Person re-identification in frontal gait sequences via histogram of optic flow energy image
De Dominicis et al. Video-based fusion of multiple detectors to counter terrorism
Lee et al. OPTIMUS: Online persistent tracking and identification of many users for smart spaces
Nkengue et al. A Hybrid Indoor Localization Framework in an IoT Ecosystem
Rafiee Improving indoor security surveillance by fusing data from BIM, UWB and Video
Amin et al. The evolution of wi-fi technology in human motion recognition: concepts, techniques and future works
JP7285536B2 (en) Classifier generation device, target person estimation device and their programs, and target person estimation system
JP6112346B2 (en) Information collection system, program, and information collection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination