CN115424320A - Method and system for displaying movement track of person in three-dimensional scene

Info

Publication number
CN115424320A
CN115424320A (application CN202210993493.8A)
Authority
CN
China
Prior art keywords
target
person
camera
three-dimensional scene
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210993493.8A
Other languages
Chinese (zh)
Inventor
羡婷
罗昌铭
张骏逸
梁景裕
张旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Younuo Technology Co ltd
Original Assignee
Beijing Younuo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Younuo Technology Co ltd filed Critical Beijing Younuo Technology Co ltd
Priority to CN202210993493.8A
Publication of CN115424320A
Legal status: Pending


Classifications

    • G06V 40/161 - Human faces: detection; localisation; normalisation
    • G06V 40/168 - Human faces: feature extraction; face representation
    • G06V 40/172 - Human faces: classification, e.g. identification
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/292 - Analysis of motion: multi-camera tracking
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/30196 - Subject of image: human being; person
    • G06T 2207/30201 - Subject of image: face
    • G06T 2207/30241 - Subject of image: trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for displaying a person's movement track in a three-dimensional scene, and relates to the technical field of three-dimensional modeling. The method comprises the following steps: generating a three-dimensional scene of the area to be detected, and arranging a plurality of cameras in the area to be detected; photographing persons to generate snapshot image information; acquiring the face information of a target person, extracting facial features from the face information, matching the facial features against the faces in all generated snapshot image information, and determining, according to the matching results, all target cameras that detected the target person and the corresponding detection times; and generating, in the three-dimensional scene, a trajectory route of the target person. By visually displaying person-trajectory data as connecting lines in a three-dimensional scene, the invention can clearly display a person's trajectory within a multi-floor building, outdoor trajectories, cross-floor trajectories and the like, and three-dimensional visualization makes the trajectory more intuitive and clear and the spatial positioning more accurate.

Description

Method and system for displaying movement track of person in three-dimensional scene
Technical Field
The invention relates to the technical field of three-dimensional modeling, in particular to a method and a system for displaying a movement track of a person in a three-dimensional scene.
Background
At present, person trajectories are mostly displayed on two-dimensional electronic maps, which can usually show only a person's two-dimensional movement track. When a person moves through three-dimensional space, however, movement in the height direction is very common, and a two-dimensional electronic map then cannot display the movement track accurately and intuitively. For a building with many floors, for example, it is difficult to show on a two-dimensional electronic map how a person moves between floors, so the displayed trajectory is unclear, inaccurate and conveys no sense of space.
Disclosure of Invention
The invention aims to solve the problem that existing ways of displaying person trajectories cannot show a three-dimensional movement track intuitively and accurately, and provides a method and a system for displaying a person's movement track in a three-dimensional scene to address this technical problem.
In a first aspect, a method for displaying a movement track of a person in a three-dimensional scene is provided, which includes:
generating a three-dimensional scene of the area to be detected, arranging a plurality of cameras in the area to be detected, and detecting personnel in the area to be detected;
when any camera detects a face, the camera that detected the face photographs the person to generate snapshot image information;
acquiring face information of a target person, extracting facial features of the face information, matching the facial features with faces in all generated snapshot image information, and determining all target cameras for detecting the target person and detection time for detecting the target person according to matching results;
and in the three-dimensional scene, according to the detection time and the position of each target camera, connecting all adjacent target cameras through curves to generate a track route of the target personnel.
In a possible implementation of the first aspect, in the three-dimensional scene, according to the detection time and the position of each target camera, all adjacent target cameras are connected by a curve to generate a trajectory route of the target person, specifically including:
judging whether all the target cameras are in a building or not according to the position of each target camera, and judging whether all the target cameras are on the same floor or not when all the target cameras are in the building;
when all the target cameras are on the same floor, in the three-dimensional scene, a display visual angle is positioned on the floor where all the target cameras are located, the detection time and the position of each target camera are processed by using a preset assembly, a connecting line between the target cameras with adjacent time and adjacent positions is established, and the track route of the target person is obtained.
In one possible implementation of the first aspect, the method further includes:
when all the target cameras are not on the same floor, positioning a display visual angle in the building in the three-dimensional scene, expanding all floors in the building, processing the detection time and the position of each target camera by using a preset assembly, and creating a connecting line between the target cameras with adjacent time and adjacent positions to obtain a track route of the target person;
or, alternatively,
and in the three-dimensional scene, positioning a display visual angle in the building, changing rendering materials of all floors of the building into semitransparent materials, processing the detection time and the position of each target camera by using a preset assembly, and creating a connecting line between the target cameras adjacent in time and position to obtain a track route of the target personnel.
In one possible implementation of the first aspect, the method further comprises:
when all the target cameras are located outdoors, a display visual angle is located outdoors in the three-dimensional scene, the detection time and the position of each target camera are processed by using a preset assembly, a connecting line between the target cameras adjacent in time and position is established, and a track route of the target person is obtained.
In one possible implementation of the first aspect, the method further includes:
when the target cameras are not all inside the building, that is, the trajectory crosses between indoors and outdoors, a display visual angle is positioned outdoors in the three-dimensional scene, rendering materials of the building's outer facade and all floors are changed into semitransparent materials, the detection time and the position of each target camera are processed by a preset assembly, and a connecting line between the target cameras adjacent in time and position is established, to obtain a track route of the target person.
In one possible implementation of the first aspect, the method further includes:
and displaying the name of the position of the corresponding target camera and the time of the target person passing through the position through an information window at each target camera on the track route.
In one possible implementation of the first aspect, the method further includes:
the method comprises the steps of obtaining a video playback instruction, determining a target camera for playback according to the video playback instruction, and playing back the captured image information within a preset time period.
In one possible implementation of the first aspect, the method further includes:
and acquiring a track playback instruction, and playing the animation of the track route in the three-dimensional scene at a first-person visual angle through a navigation grid path-finding algorithm.
In a possible implementation of the first aspect, the obtaining of the face information of the target person specifically includes:
acquiring a camera clicking instruction, determining a clicked camera according to the camera clicking instruction in the three-dimensional scene, and displaying a list of all captured image information acquired by the clicked camera;
acquiring a snapshot information click instruction, and determining clicked snapshot image information according to the snapshot information click instruction in the list of the snapshot image information;
and acquiring a face click instruction, and determining a clicked face according to the face click instruction in the clicked snapshot image information to obtain face information of the target person.
In a second aspect, a system for displaying a movement trajectory of a person in a three-dimensional scene is provided, comprising a plurality of cameras arranged in the area to be detected, an information processing terminal and a display terminal, wherein:
the information processing terminal is used for generating a three-dimensional scene of the area to be detected;
all the cameras are used for detecting the personnel in the area to be detected;
when any camera detects a face, the camera that detected the face is used for photographing the person to generate snapshot image information;
the information processing terminal is further used for acquiring face information of a target person, extracting facial features of the face information, matching the facial features with faces in all generated snapshot image information, and determining all target cameras for detecting the target person and detection time for detecting the target person according to matching results;
the information processing terminal is further used for connecting all adjacent target cameras through curves in the three-dimensional scene according to the detection time and the position of each target camera to generate a track route of the target person;
the display terminal is used for displaying the track route of the target person in the three-dimensional scene.
In a possible implementation of the second aspect, the information processing terminal is specifically configured to determine whether all the target cameras are in a building according to a position of each target camera, and when all the target cameras are in the building, determine whether all the target cameras are on the same floor;
when all the target cameras are on the same floor, in the three-dimensional scene, a display visual angle is positioned on the floor where all the target cameras are located, the detection time and the position of each target camera are processed by using a preset assembly, a connecting line between the target cameras with adjacent time and adjacent positions is established, and the track route of the target person is obtained.
In a possible implementation of the second aspect, the information processing terminal is further configured to, when all the target cameras are not on the same floor, position a display view angle in the building in the three-dimensional scene, expand all floors in the building, process the detection time and the position of each target camera by using a preset component, and create a connection line between the target cameras adjacent in time and position to obtain a trajectory route of the target person;
or, alternatively,
in the three-dimensional scene, a display visual angle is positioned in the building, rendering materials of all floors of the building are changed into semitransparent materials, the detection time and the position of each target camera are processed by a preset assembly, a connecting line between adjacent target cameras in time and position is created, and a track route of the target person is obtained.
In a possible implementation of the second aspect, the information processing terminal is further configured to, when all the target cameras are located outdoors, position a display viewing angle outdoors in the three-dimensional scene, process the detection time and the position of each target camera by using a preset component, and create a connection line between the target cameras adjacent in time and position, so as to obtain the trajectory route of the target person.
In a possible implementation of the second aspect, the information processing terminal is further configured to, when the target cameras are not all located inside a building, position a display viewing angle outdoors in the three-dimensional scene, change rendering materials of the building's outer facade and all floors into semitransparent materials, process the detection time and the position of each target camera by using a preset component, and create a connecting line between the target cameras adjacent in time and position to obtain a trajectory route of the target person.
In a possible implementation of the second aspect, the display terminal is further configured to display, at each target camera on the trajectory route, a name of a location where the corresponding target camera is located and a time when the target person passes through the location through an information window.
In a possible implementation of the second aspect, the information processing terminal is further configured to obtain a video playback instruction, determine a target camera for playback according to the video playback instruction, and play back the captured image information within a preset time period through the display terminal.
In a possible implementation of the second aspect, the information processing terminal is further configured to obtain a track playback instruction, and play an animation of the track route in the three-dimensional scene at a first-person perspective through a navigation grid way-finding algorithm.
In a possible implementation of the second aspect, the information processing terminal is specifically configured to obtain a camera click instruction, determine a clicked camera according to the camera click instruction in the three-dimensional scene, and display a list of all captured image information acquired by the clicked camera;
acquiring a snapshot information click instruction, and determining clicked snapshot image information according to the snapshot information click instruction in the list of the snapshot image information;
and acquiring a face click instruction, and determining a clicked face according to the face click instruction in the clicked snapshot image information to obtain face information of the target person.
According to the above scheme, by visually displaying person-trajectory data as connecting lines in a three-dimensional scene, the trajectories of persons in multi-floor buildings, outdoor trajectories, cross-floor trajectories and the like can be clearly displayed, and three-dimensional visualization makes the trajectories more intuitive and clear and the spatial positioning more accurate.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a schematic flow chart of a method for displaying a movement trajectory of a person in a three-dimensional scene according to an embodiment of the present invention;
FIG. 2 is a schematic floor splitting diagram provided by an embodiment of a method for displaying a person movement trajectory in a three-dimensional scene according to the present invention;
fig. 3 is a schematic diagram of a semitransparent material provided by an embodiment of the method for displaying a moving trajectory of a person in a three-dimensional scene according to the present invention.
Detailed Description
The principles and features of the present invention will be described with reference to the following drawings, which are illustrative only and are not intended to limit the scope of the invention.
As shown in fig. 1, a schematic flow chart is provided for an embodiment of the method for displaying a person's movement track in a three-dimensional scene; the method includes:
s1, generating a three-dimensional scene of a to-be-detected area, arranging a plurality of cameras in the to-be-detected area, and detecting personnel in the to-be-detected area;
It should be noted that the three-dimensional scene of the area to be detected can be obtained by modeling from a building drawing; after modeling, corresponding point-placement operations can be performed in the three-dimensional scene to determine the positions and number of the cameras, where the specific number and positions can be set according to each camera's maximum monitorable angle so as to seamlessly cover the area to be detected.
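For illustration only, the following minimal sketch (in Python, assuming identical cameras with a circular coverage radius placed on a square grid; the embodiment fixes no particular layout) shows how a seamless-coverage camera count could be estimated:

import math

def max_grid_spacing(coverage_radius_m: float) -> float:
    # Circles of radius r centred on a square grid cover the plane without
    # gaps as long as the grid spacing does not exceed r * sqrt(2).
    return coverage_radius_m * math.sqrt(2.0)

def cameras_needed(width_m: float, depth_m: float, coverage_radius_m: float) -> int:
    # Number of grid cells needed to blanket a rectangular area to be detected.
    s = max_grid_spacing(coverage_radius_m)
    return math.ceil(width_m / s) * math.ceil(depth_m / s)

# e.g. a 200 m x 100 m area with a 50 m camera coverage radius
print(cameras_needed(200, 100, 50))  # -> 6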
The person in the area to be detected may be subjected to face recognition and detection by an existing face detection algorithm or program, for example, a third-party program may be invoked for detection.
S2, when any camera detects a face, the camera that detected the face photographs the person to generate snapshot image information;
It should be noted that the snapshot image information may include the snapshot face picture, gender, similarity, snapshot time and the like; the person to be searched for is screened out using this information, so that the person's trajectory data can be requested subsequently.
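For illustration, such a snapshot record could be represented as follows (a minimal sketch; the field names are assumptions drawn from the fields listed above, not a format defined by the embodiment):

from dataclasses import dataclass

@dataclass
class Snapshot:
    camera_id: str        # ID of the camera that captured the face
    face_image: bytes     # snapshot face picture
    gender: str
    similarity: float     # match score against the queried face, in [0, 1]
    snapshot_time: float  # capture time as a Unix timestamp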
S3, acquiring the face information of the target person, extracting the facial features of the face information, matching the facial features with the faces in all the generated snapshot image information, and determining all target cameras for detecting the target person and the detection time for detecting the target person according to the matching result;
It should be understood that a matching algorithm may be selected according to actual requirements. For example, similarity matching may be performed between the facial features extracted from the face information and the facial features in all the pieces of snapshot image information; when the similarity is greater than or equal to 72%, the two faces may be determined to belong to the same person and the camera ID information is returned, thereby obtaining all the cameras that the person passed.
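A minimal sketch of this matching step, assuming the facial features are vectors compared by cosine similarity (the embodiment names the 72% threshold but does not fix a similarity metric, and the feature attribute is a hypothetical precomputed vector on each snapshot record):

import numpy as np

SIMILARITY_THRESHOLD = 0.72  # the threshold given in the example above

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity of two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_target(target_feature: np.ndarray, snapshots) -> list:
    # Return (camera_id, snapshot_time) for every snapshot matching the
    # target person, ordered by detection time.
    hits = [
        (s.camera_id, s.snapshot_time)
        for s in snapshots
        if cosine_similarity(target_feature, s.feature) >= SIMILARITY_THRESHOLD
    ]
    return sorted(hits, key=lambda h: h[1])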
And S4, in the three-dimensional scene, connecting all adjacent target cameras through curves according to the detection time and the position of each target camera to generate a track route of the target personnel.
For example, in a three-dimensional scene, every two adjacent points among all the target cameras are connected by a LineRenderer component to generate curves, thereby forming the trajectory route.
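The geometry of such a curve might be computed as in the following sketch (assumptions: a y-up world and quadratic Bezier arcs with a raised midpoint so the line stays visible above floors; the sampled points would then be handed to a line-rendering component such as the LineRenderer mentioned above):

import numpy as np

def arc_points(p0, p1, lift=2.0, n=16):
    # Sample a quadratic Bezier arc between two camera positions; the raised
    # control point lifts the curve above floors and walls.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ctrl = (p0 + p1) / 2.0 + np.array([0.0, lift, 0.0])
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * ctrl + t ** 2 * p1

def trajectory_polyline(camera_positions):
    # Join consecutive (time-ordered) camera positions with arcs and
    # concatenate them into one polyline for a line-rendering component.
    segments = [arc_points(a, b) for a, b in zip(camera_positions, camera_positions[1:])]
    return np.vstack(segments)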
By visually displaying the person-trajectory data as connecting lines in the three-dimensional scene, this embodiment can clearly display a person's trajectory within a multi-floor building, outdoor trajectories, cross-floor trajectories and the like; three-dimensional visualization makes the trajectory more intuitive and clear and the spatial positioning more accurate.
Optionally, in some possible embodiments, in a three-dimensional scene, according to the detection time and the position of each target camera, all adjacent target cameras are connected by a curve to generate a trajectory route of a target person, specifically including:
judging whether all target cameras are in a building or not according to the position of each target camera, and judging whether all the target cameras are on the same floor or not when all the target cameras are in the building;
when all the target cameras are on the same floor, in a three-dimensional scene, the display visual angles are positioned on the floors where all the target cameras are located, the detection time and the positions of all the target cameras are processed by using a preset assembly, a connecting line between the target cameras which are adjacent in time and position is created, and the track route of the target personnel is obtained.
In this display mode, the spatial positions the person passes through, such as rooms and corridors, can be observed through the three-dimensional floor structure, which is more accurate and intuitive and easier to localize.
It should be noted that, since the target person may move back and forth in the area to be detected, it is necessary to ensure that cameras adjacent in spatial position are also continuous in time, i.e., time-adjacent.
Alternatively, a time threshold may be set, determined from the person's walking speed and the distance each camera monitors. For example, assuming a walking speed of 1 m/s and a camera monitoring radius of 50 meters, the time threshold may be set to 100 seconds; if the interval between two detections exceeds the threshold, they are considered not adjacent in time.
For example, suppose three cameras A, B and C monitor a target person who passes camera A at 7:10, camera B at 7:11 and camera C at 7:12, turns back on the way, and then passes camera C at 7:15, camera B at 7:16 and camera A at 7:17 without stopping. Cameras A and B are adjacent in spatial position and the person passes them twice. The interval between the first pass of camera A and the first pass of camera B is 60 seconds, less than the set time threshold of 100 seconds, so the two detections are time-adjacent and a connecting line from camera A to camera B is generated. The interval between the first pass of camera A and the second pass of camera A, however, is 420 seconds, more than the set time threshold of 100 seconds, so no connecting line is generated between those two detections.
In addition, as an alternative to setting a time threshold, when the target person moves back and forth, the camera point positions can simply be connected in chronological order.
In this way, the trajectory of the target person can be accurately restored.
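A minimal sketch of this linking rule (the data shapes are assumptions: time-sorted detections and a spatial-adjacency table are taken as given; the 100-second figure comes from the example above):

TIME_THRESHOLD_S = 100.0  # from the example: 1 m/s walking speed, 50 m radius

def link_detections(detections, neighbours, max_gap=TIME_THRESHOLD_S):
    # detections: time-sorted list of (camera_id, t) for the target person.
    # neighbours: dict mapping camera_id -> set of spatially adjacent cameras.
    # Returns the pairs of consecutive detections to join with a line.
    links = []
    for (cam_a, t_a), (cam_b, t_b) in zip(detections, detections[1:]):
        spatially_adjacent = cam_b in neighbours.get(cam_a, set())
        if spatially_adjacent and (t_b - t_a) <= max_gap:
            links.append(((cam_a, t_a), (cam_b, t_b)))
    return links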
Optionally, in some possible embodiments, the method further includes:
when all target cameras are not on the same floor, positioning a display visual angle in a building in a three-dimensional scene, expanding all floors in the building, processing detection time and the position of each target camera by using a preset assembly, and creating a connecting line between the target cameras adjacent in time and position to obtain a track route of a target person;
for example, as shown in fig. 2, an exemplary floor splitting diagram is given, all floors of the building in a three-dimensional space are longitudinally unfolded, a connecting line is created through position information, and trajectory information is displayed, so that the spatial positions where the building passes through, such as floors, rooms, corridors and the like, can be observed through the three-dimensional building structure.
or, alternatively,
in a three-dimensional scene, a display visual angle is positioned in a building, rendering materials of all floors of the building are changed into semitransparent materials, a preset assembly is used for processing detection time and the position of each target camera, a connecting line between target cameras adjacent in time and position is established, and a track route of target personnel is obtained.
For example, fig. 3 shows an exemplary semitransparent-material diagram: all floor materials of the building in three-dimensional space are changed into semitransparent materials, a connecting line is created from the position information, and the trajectory information is displayed; this display mode can present all trajectories of a person inside the building in an integrated manner.
It should be understood that the position of each camera can be determined from the entered monitoring JSON data; for example, from this data all the acquired camera records are classified by floor information and stored in table data.
The monitoring JSON data stores, for each monitoring point, its ID, building, floor, room and position information; if a camera is an outdoor point, its floor information is stored with the special ID 'world'. If the floor information of every camera in the track is 'world', all the cameras are outdoors and the viewing angle is positioned outdoors; if every camera in the track has the same floor ID, they are on the same floor and the viewing angle is positioned on that floor; if the cameras in the track have different floor IDs, the track crosses floors and the viewing angle is positioned on the three-dimensional building; and if the cameras in the track have different floor IDs and the special ID 'world' also appears, the track crosses between indoors and outdoors and the viewing angle is positioned outdoors.
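These four rules amount to a simple classification over the floor IDs, sketched below (assuming each camera record is a dict with a 'floor' field, as in the monitoring JSON described above):

OUTDOOR_ID = "world"  # special floor ID for outdoor point locations

def choose_view(track_cameras):
    # Decide where to position the display viewing angle for a trajectory.
    floors = {cam["floor"] for cam in track_cameras}
    if floors == {OUTDOOR_ID}:
        return "outdoors"                 # entire track is outdoors
    if len(floors) == 1:
        return "floor:" + floors.pop()    # single indoor floor
    if OUTDOOR_ID in floors:
        return "outdoors-translucent"     # indoor/outdoor mix: transparent facade
    return "building"                     # cross-floor track inside the building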
Then, a line can be created from the position information using the LineRenderer component, with a flowing effect inside the line indicating the trajectory's time axis. Above each camera, a 2D information card is created to display the device name, the snapshot picture and the snapshot time when the person passed that camera, so that the person's behavior trajectory outdoors can be clearly understood. In this display mode, the spatial positions the person passes through can be observed in the three-dimensional scene, which is more accurate, more intuitive and easier to localize.
Optionally, in some possible embodiments, the method further includes:
when all the target cameras are located outdoors, the display visual angle is located outdoors in the three-dimensional scene, the detection time and the position of each target camera are processed by using the preset assembly, a connecting line between the target cameras adjacent in time and position is established, and the track route of the target personnel is obtained.
By monitoring the target person outside buildings as well, the outdoor portion of the person's movement track can be displayed more comprehensively and accurately.
Optionally, in some possible embodiments, the method further includes:
when the target cameras are not all inside the building, that is, the trajectory crosses between indoors and outdoors, the display visual angle is positioned outdoors in the three-dimensional scene, rendering materials of the building's outer facade and all floors are changed into semitransparent materials, the detection time and the position of each target camera are processed by using a preset assembly, and a connecting line between the target cameras adjacent in time and position is established, to obtain the track route of the target person.
The rendering materials of the outer vertical surface and all floors of the building are changed into the semitransparent materials, so that the moving tracks of the target person on each floor and outdoors can be visually seen, the relevance of the target person in the building and the outdoor action can be better displayed, and the display is clearer and more visual.
Optionally, in some possible embodiments, the method further includes:
and displaying the name of the position where the corresponding target camera is located and the time of the target person passing through the position through an information window at each target camera on the track route.
Through the information window, the passing time of the target person can be displayed more accurately, and a user can analyze the action track of the target person conveniently.
Optionally, in some possible embodiments, the method further includes:
and acquiring a video playback instruction, determining a target camera for playback according to the video playback instruction, and playing back the captured image information within a preset time period.
When the person's trajectory is displayed, clicking a camera on the trajectory requests the historical playback footage of the person passing that camera from a third party, using the snapshot time and the RTSP protocol, and displays it, which is convenient for tracking and observation.
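A sketch of how such a playback request could be assembled (the padding around the snapshot time and the URL layout are hypothetical; a real third-party RTSP service defines its own path and parameters):

from datetime import datetime, timedelta

def playback_window(snapshot_time: datetime, pad=timedelta(seconds=30)):
    # Bracket the snapshot time with a preset period on either side.
    return snapshot_time - pad, snapshot_time + pad

def playback_url(host: str, camera_id: str, snapshot_time: datetime) -> str:
    # Hypothetical historical-playback URL; not a standardised RTSP path.
    start, end = playback_window(snapshot_time)
    fmt = "%Y%m%dT%H%M%SZ"
    return (f"rtsp://{host}/playback/{camera_id}"
            f"?starttime={start.strftime(fmt)}&endtime={end.strftime(fmt)}")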
Optionally, in some possible embodiments, the method further includes:
and acquiring a track playback instruction, and playing the animation of the track route in the three-dimensional scene at a first-person visual angle through a navigation grid path-finding algorithm.
When a person's trajectory is displayed, the trajectory playback button can be clicked: the route through the trajectory points in three-dimensional space is computed by a NavMesh navigation-grid path-finding algorithm, the camera is then switched to a first-person visual angle, and a first-person roam reproduces the historical trajectory. This display mode lets more details of the three-dimensional structure be observed and gives a stronger sense of immersion.
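The first-person roam amounts to interpolating the camera along the way-found path; a minimal sketch (assuming the navigation-grid query has already returned the path as a list of 3D waypoints, and with assumed speed and frame rate):

import numpy as np

def roam_positions(path, speed_m_s=1.4, fps=30.0):
    # Yield one camera position per frame while moving along the waypoints
    # at constant speed; the speed and frame rate here are assumptions.
    step = speed_m_s / fps
    waypoints = [np.asarray(p, float) for p in path]
    for a, b in zip(waypoints, waypoints[1:]):
        seg = b - a
        length = float(np.linalg.norm(seg))
        n = max(1, int(length / step))
        for i in range(n):
            yield a + seg * (i / n)
    yield waypoints[-1]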
Optionally, in some possible embodiments, the obtaining of the face information of the target person specifically includes:
acquiring a camera clicking instruction, determining a clicked camera according to the camera clicking instruction in a three-dimensional scene, and displaying a list of all captured image information acquired by the clicked camera;
acquiring a snapshot information click instruction, and determining clicked snapshot image information according to the snapshot information click instruction in a list of the snapshot image information;
and acquiring a face clicking instruction, and determining a clicked face according to the face clicking instruction in the clicked snapshot image information to obtain face information of the target person.
Specifically, when a camera point is clicked in the three-dimensional scene, all snapshot information of that camera can be obtained from the database through the camera's ID and written into a snapshot list for display. The information includes snapshot face pictures, gender, similarity, snapshot time and the like, and the person to be searched for is screened out through this information so that the person's trajectory data can be requested subsequently.
The snapshot information of the person to be queried is then selected in the snapshot list, and the face picture is sent to a third-party facial-feature recognition service through a network protocol. Based on the face picture, the service returns all the camera information that the person passed, including the name, ID and position of each camera, together with the snapshot picture and snapshot time when the person passed it, so that the person's track can subsequently be displayed as connecting lines, and the snapshot information of all the passed cameras can be displayed by creating 2D information cards.
The invention also provides a system for displaying the movement track of a person in a three-dimensional scene, comprising a plurality of cameras arranged in the area to be detected, an information processing terminal and a display terminal, wherein:
the information processing terminal is used for generating a three-dimensional scene of the area to be detected;
all the cameras are used for detecting the personnel in the area to be detected;
when any camera detects a face, the camera that detected the face is used for photographing the person to generate snapshot image information;
the information processing terminal is also used for acquiring the face information of the target person, extracting the facial features of the face information, matching the facial features with the faces in all the generated snapshot image information, and determining all target cameras for detecting the target person and the detection time for detecting the target person according to the matching result;
the information processing terminal is also used for connecting all adjacent target cameras through curves in a three-dimensional scene according to the detection time and the position of each target camera to generate a track route of a target person;
the display terminal is used for displaying the track route of the target person in the three-dimensional scene.
Optionally, in some possible embodiments, the information processing terminal is specifically configured to determine, according to the position of each target camera, whether all target cameras are in the building, and when all target cameras are in the building, determine whether all target cameras are on the same floor;
when all the target cameras are on the same floor, the display visual angles are positioned on the floors where all the target cameras are located in the three-dimensional scene, the detection time and the positions of all the target cameras are processed by using the preset assembly, a connecting line between the target cameras which are adjacent in time and position is established, and the track route of the target personnel is obtained.
Optionally, in some possible embodiments, the information processing terminal is further configured to, when all the target cameras are not on the same floor, in the three-dimensional scene, position the display view angle in the building, expand all floors in the building, process the detection time and the position of each target camera by using a preset component, and create a connection line between the target cameras that are adjacent in time and position, so as to obtain a trajectory route of the target person;
or, alternatively,
in a three-dimensional scene, a display visual angle is positioned in a building, rendering materials of all floors of the building are changed into semitransparent materials, a preset assembly is used for processing detection time and the position of each target camera, a connecting line between target cameras adjacent in time and position is established, and a track route of target personnel is obtained.
Optionally, in some possible embodiments, the information processing terminal is further configured to, when all the target cameras are located outdoors, position the display viewing angle outdoors in the three-dimensional scene, process the detection time and the position of each target camera by using a preset component, and create a connection line between the target cameras with adjacent time and adjacent positions to obtain the trajectory route of the target person.
Optionally, in some possible embodiments, the information processing terminal is further configured to, when the target cameras are not all inside the building, position the display viewing angle outdoors in the three-dimensional scene, change the rendering material of the building's outer facade and all floors into a semitransparent material, process the detection time and the position of each target camera by using a preset component, and create a connecting line between the target cameras adjacent in time and position, so as to obtain the trajectory route of the target person.
Optionally, in some possible embodiments, the display terminal is further configured to display, at each target camera on the trajectory route, a name of a location where the corresponding target camera is located and a time when the target person passes through the location through the information window.
Optionally, in some possible embodiments, the information processing terminal is further configured to obtain a video playback instruction, determine a target camera for playback according to the video playback instruction, and play back the captured image information within a preset time period through the display terminal.
Optionally, in some possible embodiments, the information processing terminal is further configured to obtain a track playback instruction, and play an animation of the track route in the three-dimensional scene from the first-person perspective through the navigation grid routing algorithm.
Optionally, in some possible embodiments, the information processing terminal is specifically configured to obtain a camera click instruction, determine a clicked camera according to the camera click instruction in a three-dimensional scene, and display a list of all captured image information acquired by the clicked camera;
acquiring a snapshot information clicking instruction, and determining clicked snapshot image information according to the snapshot information clicking instruction in a list of the snapshot image information;
and acquiring a face clicking instruction, and determining a clicked face according to the face clicking instruction in the clicked snapshot image information to obtain face information of the target person.
It should be understood that the above embodiments are product embodiments corresponding to the previous method embodiments, and the description of the product embodiments may refer to the description of the previous method embodiments, and will not be repeated herein.
It is understood that any combination of the above embodiments can be made by those skilled in the art without departing from the spirit of the present invention and is within the scope of the present invention.
The reader should understand that in the description of the specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the method embodiments described above are merely illustrative: the division into steps is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple steps may be combined or integrated into another step, or some features may be omitted or not implemented.
The above method, if implemented in the form of software functional units and sold or used as a stand-alone product, can be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for displaying a movement track of a person in a three-dimensional scene is characterized by comprising the following steps:
generating a three-dimensional scene of a region to be detected, arranging a plurality of cameras in the region to be detected, and detecting personnel in the region to be detected;
when any camera detects a face, the camera that detected the face photographs the person to generate snapshot image information;
acquiring face information of a target person, extracting facial features of the face information, matching the facial features with faces in all generated snapshot image information, and determining all target cameras for detecting the target person and detection time for detecting the target person according to matching results;
and in the three-dimensional scene, according to the detection time and the position of each target camera, connecting all adjacent target cameras through curves to generate a track route of the target personnel.
2. The method according to claim 1, wherein the generating of the trajectory route of the target person by connecting all adjacent target cameras through curves according to the detection time and the position of each target camera in the three-dimensional scene specifically comprises:
judging whether all the target cameras are in a building or not according to the position of each target camera, and judging whether all the target cameras are on the same floor or not when all the target cameras are in the building;
when all the target cameras are on the same floor, in the three-dimensional scene, the display visual angles are positioned on the floors where all the target cameras are located, the detection time and the positions of all the target cameras are processed by using a preset assembly, a connecting line between the target cameras with adjacent time and adjacent positions is created, and the track route of the target person is obtained.
3. The method for displaying the movement track of the person in the three-dimensional scene according to claim 2, further comprising:
when all the target cameras are not on the same floor, positioning a display visual angle in the building in the three-dimensional scene, unfolding all floors in the building, processing the detection time and the position of each target camera by using a preset assembly, and creating a connecting line between the target cameras with adjacent time and adjacent positions to obtain a track route of the target person;
or, alternatively,
and in the three-dimensional scene, positioning a display visual angle in the building, changing rendering materials of all floors of the building into semitransparent materials, processing the detection time and the position of each target camera by using a preset assembly, and creating a connecting line between the target cameras adjacent in time and position to obtain a track route of the target personnel.
4. The method for displaying the movement track of the person in the three-dimensional scene according to claim 2, further comprising:
when all the target cameras are located outdoors, a display visual angle is located outdoors in the three-dimensional scene, the detection time and the position of each target camera are processed by using a preset assembly, a connecting line between the target cameras adjacent in time and position is established, and a track route of the target person is obtained.
5. The method for displaying the movement track of the person in the three-dimensional scene according to claim 2, further comprising:
when the target cameras are not all inside the building, a display visual angle is positioned outdoors in the three-dimensional scene, rendering materials of the building's outer facade and all floors are changed into semitransparent materials, the detection time and the position of each target camera are processed by a preset assembly, and a connecting line between the target cameras adjacent in time and position is established, to obtain a track route of the target person.
6. The method for displaying the movement track of the person in the three-dimensional scene according to claim 1, further comprising:
and displaying the name of the position of the corresponding target camera and the time of the target person passing through the position through an information window at each target camera on the track route.
7. The method for displaying the movement track of the person in the three-dimensional scene according to claim 1, further comprising:
the method comprises the steps of obtaining a video playback instruction, determining a target camera for playback according to the video playback instruction, and playing back the captured image information within a preset time period.
8. The method for displaying the movement track of the person in the three-dimensional scene according to claim 1, further comprising:
and acquiring a track playback instruction, and playing the animation of the track route in the three-dimensional scene at a first-person visual angle through a navigation grid path-finding algorithm.
9. The method for displaying the movement trajectory of the person in the three-dimensional scene according to claim 1, wherein the obtaining of the face information of the target person specifically includes:
acquiring a camera clicking instruction, determining a clicked camera according to the camera clicking instruction in the three-dimensional scene, and displaying a list of all captured image information acquired by the clicked camera;
acquiring a snapshot information click instruction, and determining clicked snapshot image information according to the snapshot information click instruction in the list of the snapshot image information;
and acquiring a face click instruction, and determining a clicked face according to the face click instruction in the clicked snapshot image information to obtain face information of the target person.
10. A system for displaying a movement trajectory of a person in a three-dimensional scene, comprising a plurality of cameras arranged in the area to be detected, an information processing terminal and a display terminal, wherein:
the information processing terminal is used for generating a three-dimensional scene of the area to be detected;
all the cameras are used for detecting the personnel in the area to be detected;
when any camera detects a face, the camera that detected the face is used for photographing the person to generate snapshot image information;
the information processing terminal is further used for acquiring the face information of a target person, extracting the facial features of the face information, matching the facial features with the faces in all the generated snapshot image information, and determining all target cameras for detecting the target person and the detection time for detecting the target person according to the matching result;
the information processing terminal is further used for connecting all adjacent target cameras through curves in the three-dimensional scene according to the detection time and the position of each target camera to generate a track route of the target person;
the display terminal is used for displaying the track route of the target person in the three-dimensional scene.
CN202210993493.8A (priority/filing date 2022-08-18) - Method and system for displaying movement track of person in three-dimensional scene - Pending - CN115424320A (en)

Priority Applications (1)

Application Number: CN202210993493.8A · Priority date: 2022-08-18 · Filing date: 2022-08-18
Title: Method and system for displaying movement track of person in three-dimensional scene

Applications Claiming Priority (1)

Application Number: CN202210993493.8A · Priority date: 2022-08-18 · Filing date: 2022-08-18
Title: Method and system for displaying movement track of person in three-dimensional scene

Publications (1)

Publication Number: CN115424320A · Publication Date: 2022-12-02

Family

ID=84197780

Family Applications (1)

Application Number: CN202210993493.8A · Priority date: 2022-08-18 · Filing date: 2022-08-18
Title: Method and system for displaying movement track of person in three-dimensional scene

Country Status (1)

Country Link
CN (1) CN115424320A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117636265A (en) * 2024-01-25 2024-03-01 南昌大学第一附属医院 Patient positioning method and system for medical scene
CN117636265B (en) * 2024-01-25 2024-07-12 南昌大学第一附属医院 Patient positioning method and system for medical scene


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination