CN113643320A - Image processing method and device, electronic equipment and computer readable storage medium

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN113643320A
Authority
CN
China
Prior art keywords
track
track point
trajectory
target object
point data
Prior art date
Legal status
Withdrawn
Application number
CN202110728309.2A
Other languages
Chinese (zh)
Inventor
干刚
谭志颖
Current Assignee
Xi'an Shangtang Intelligent Technology Co ltd
Original Assignee
Xi'an Shangtang Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Shangtang Intelligent Technology Co ltd
Priority to CN202110728309.2A
Publication of CN113643320A
Priority to PCT/CN2021/134883 (WO2023273154A1)
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images


Abstract

The application discloses an image processing method and device, electronic equipment and a computer readable storage medium. The method comprises the following steps: acquiring a first three-dimensional model of a target scene and one or more first track point data of a target object in the target scene; displaying a trajectory of the target object within the first three-dimensional model in accordance with the one or more first track point data.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In the security field, the trajectory of a target object is often displayed, and relevant personnel can further analyze information such as the trajectory of the target object by observing it. Therefore, how to improve the display effect of the trajectory of the target object is of great significance.
Disclosure of Invention
The application provides an image processing method and device, an electronic device and a computer readable storage medium.
In a first aspect, an image processing method is provided, the method comprising:
acquiring a first three-dimensional model of a target scene and one or more first track point data of a target object in the target scene;
displaying a trajectory of the target object within the first three-dimensional model in accordance with the one or more first trajectory point data.
In combination with any embodiment of the present application, the displaying the trajectory of the target object within the first three-dimensional model according to the one or more first trajectory point data includes:
determining one or more first track points of the target object according to the one or more first track point data;
determining a first three-dimensional region from the first three-dimensional model that includes the one or more first trajectory points;
and displaying the first three-dimensional area in a preset display mode in the first three-dimensional model, and displaying the track of the target object in the first three-dimensional area.
In combination with any embodiment of the present application, the determining a first three-dimensional region including the one or more first track points from the first three-dimensional model includes:
acquiring the size of the first three-dimensional region and the shape of the first three-dimensional region;
determining a centroid of the one or more first trace points;
the center of mass is used as the center of the first three-dimensional area, and the first three-dimensional area is determined from the first three-dimensional model according to the size of the first three-dimensional area and the shape of the first three-dimensional area.
In combination with any embodiment of the present application, the one or more first track points include a second track point, a third track point, a fourth track point, and one or more fifth track points, and a timestamp of the second track point, a timestamp of the third track point, a timestamp of the fourth track point, and a timestamp of the one or more fifth track points decrease in sequence;
the determining the centroid of the one or more first trace points comprises:
under the condition that the third track point is located in a first to-be-confirmed area, determining a centroid of the first to-be-confirmed area as a centroid of the one or more first track points, wherein the first to-be-confirmed area is a polygonal area, and vertexes of the first to-be-confirmed area are the second track point, the fourth track point and the one or more fifth track points;
in a case that the third track point is located outside the first to-be-confirmed region, determining a centroid of a second to-be-confirmed region as the centroid of the one or more first track points, wherein the first to-be-confirmed region and the second to-be-confirmed region are both polygonal regions, vertices of the first to-be-confirmed region are the second track point, the fourth track point, and the one or more fifth track points, and vertices of the second to-be-confirmed region are the second track point, the third track point, the fourth track point, and the one or more fifth track points.
With reference to any one of the embodiments of the present application, the determining one or more first trajectory points of the target object according to the one or more first trajectory point data includes:
determining the n track point data with the largest timestamps in the one or more first track point data as one or more effective track point data;
and determining one or more first track points of the target object according to the one or more effective track point data.
With reference to any embodiment of the present application, in a case that the number of the first trajectory point data is greater than 1, the obtaining one or more first trajectory point data of the target object in the target scene includes:
acquiring two or more second track point data and a distance threshold of the target object in the target scene;
determining the distance between adjacent track point data sets, wherein the adjacent track point data sets comprise two second track point data with adjacent timestamps;
under the condition that the distance is smaller than the distance threshold value, removing old track point data from the two or more second track point data to obtain the one or more first track point data, wherein the old track point data is the second track point data with the smallest timestamp in the adjacent track point data set;
and taking the two or more second track point data as the one or more first track point data under the condition that the distance is greater than or equal to the distance threshold value.
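For illustration only, the following is a minimal sketch of the distance-threshold filtering described above, assuming each track point data carries a three-dimensional position and a timestamp; the names TrackPoint and dedup_track_points are illustrative and not part of this application:

```python
import math
from dataclasses import dataclass

@dataclass
class TrackPoint:
    position: tuple   # (x, y, z) in the coordinate system of the first three-dimensional model
    timestamp: float

def dedup_track_points(points, distance_threshold):
    """Drop the older point of each timestamp-adjacent pair closer than the threshold."""
    points = sorted(points, key=lambda p: p.timestamp)  # order by timestamp
    kept = []
    for p in points:
        # adjacent pair (kept[-1], p): if too close, the smaller-timestamp point is removed
        if kept and math.dist(kept[-1].position, p.position) < distance_threshold:
            kept[-1] = p
        else:
            kept.append(p)
    return kept
```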
In combination with any one of the embodiments of the present application, the track of the target object includes a first track and a second track, the first track is the track between the track point with the m-th largest timestamp and the track point with the largest timestamp, the second track is the track of the target object except the first track, and the display mode of the first track is different from the display mode of the second track.
With reference to any one of the embodiments of the present application, the first three-dimensional model includes a first floor and a second floor, the first floor and the second floor have different heights, and the trajectory of the target object is located in the first floor, and the method further includes:
displaying the second floor within the first three-dimensional model if it is determined that the target object is present at the second floor.
With reference to any one of the embodiments of the present application, before the determining that the target object appears at the second floor, the method further includes:
acquiring a first image containing the target object;
the determining that the target object is present at the second floor comprises:
and determining that the second image contains the target object by comparing the first image with the second image, wherein the second image is acquired by a camera of the second floor.
In a second aspect, there is provided an image processing apparatus, the apparatus comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first three-dimensional model of a target scene and one or more first track point data of a target object in the target scene;
a processing unit for displaying the trajectory of the target object within the first three-dimensional model according to the one or more first trajectory point data.
In combination with any embodiment of the present application, the processing unit is configured to:
determining one or more first track points of the target object according to the one or more first track point data;
determining a first three-dimensional region from the first three-dimensional model that includes the one or more first trajectory points;
and displaying the first three-dimensional area in a preset display mode in the first three-dimensional model, and displaying the track of the target object in the first three-dimensional area.
In combination with any embodiment of the present application, the processing unit is configured to:
acquiring the size of the first three-dimensional region and the shape of the first three-dimensional region;
determining a centroid of the one or more first trace points;
the center of mass is used as the center of the first three-dimensional area, and the first three-dimensional area is determined from the first three-dimensional model according to the size of the first three-dimensional area and the shape of the first three-dimensional area.
In combination with any embodiment of the present application, the one or more first track points include a second track point, a third track point, a fourth track point, and one or more fifth track points, and a timestamp of the second track point, a timestamp of the third track point, a timestamp of the fourth track point, and a timestamp of the one or more fifth track points decrease in sequence;
the processing unit is configured to:
under the condition that the third track point is located in a first to-be-confirmed area, determining a centroid of the first to-be-confirmed area as a centroid of the one or more first track points, wherein the first to-be-confirmed area is a polygonal area, and vertexes of the first to-be-confirmed area are the second track point, the fourth track point and the one or more fifth track points;
in a case that the third track point is located outside the first to-be-confirmed region, determining a centroid of a second to-be-confirmed region as the centroid of the one or more first track points, wherein the first to-be-confirmed region and the second to-be-confirmed region are both polygonal regions, vertices of the first to-be-confirmed region are the second track point, the fourth track point, and the one or more fifth track points, and vertices of the second to-be-confirmed region are the second track point, the third track point, the fourth track point, and the one or more fifth track points.
In combination with any embodiment of the present application, the processing unit is configured to:
determining the n track point data with the largest timestamps in the one or more first track point data as one or more effective track point data;
and determining one or more first track points of the target object according to the one or more effective track point data.
With reference to any embodiment of the present application, in a case that the number of the first trajectory point data is greater than 1, the obtaining unit is configured to:
acquiring two or more second track point data and a distance threshold of the target object in the target scene;
determining the distance between adjacent track point data sets, wherein the adjacent track point data sets comprise two second track point data with adjacent timestamps;
under the condition that the distance is smaller than the distance threshold value, removing old track point data from the two or more second track point data to obtain the one or more first track point data, wherein the old track point data is the second track point data with the smallest timestamp in the adjacent track point data set;
and taking the two or more second track point data as the one or more first track point data under the condition that the distance is greater than or equal to the distance threshold value.
In combination with any one of the embodiments of the present application, the track of the target object includes a first track and a second track, the first track is the track between the track point with the m-th largest timestamp and the track point with the largest timestamp, the second track is the track of the target object except the first track, and the display mode of the first track is different from the display mode of the second track.
With reference to any one of the embodiments of the present application, the first three-dimensional model includes a first floor and a second floor, the first floor and the second floor have different heights, the trajectory of the target object is located in the first floor, and the processing unit is further configured to:
display the second floor within the first three-dimensional model if it is determined that the target object is present at the second floor.
With reference to any one of the embodiments of the present application, before it is determined that the target object appears at the second floor, the acquisition unit is further configured to:
acquire a first image containing the target object;
the processing unit is configured to:
determine that the second image contains the target object by comparing the first image with the second image, wherein the second image is acquired by a camera of the second floor.
In a third aspect, an electronic device is provided, which includes: a processor and a memory for storing computer program code, the computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations.
In a fourth aspect, another electronic device is provided, including: a processor, transmitting means, input means, output means, and a memory for storing computer program code comprising computer instructions, which, when executed by the processor, cause the electronic device to perform the method of the first aspect and any one of its possible implementations.
In a fifth aspect, there is provided a computer-readable storage medium having stored therein a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect and any one of its possible implementations.
In a sixth aspect, a computer program product is provided, comprising a computer program or instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any one of its possible implementations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a trace point region of a target object according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a trace point region of a target object according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a trace point region of a target object according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a track between track points provided in an embodiment of the present application;
fig. 6 is another schematic diagram of a track between track points provided in an embodiment of the present application;
fig. 7 is yet another schematic diagram of a track between track points provided in an embodiment of the present application;
fig. 8 is yet another schematic diagram of a track between track points provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a first floor being displayed in a first three-dimensional model according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a second floor displayed in a first three-dimensional model according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 12 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more and "a plurality" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" the following items or similar expressions refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or plural. The character "/" may also represent division in a mathematical operation, e.g., a/b means a divided by b, and 6/3 = 2.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The execution subject of the embodiment of the present application is an image processing apparatus, where the image processing apparatus may be any electronic device that can execute the technical solution disclosed in the embodiment of the present application. Optionally, the image processing apparatus may be one of the following: a mobile phone, a computer, a tablet computer, or a wearable smart device.
It should be understood that the method embodiments of the present application may also be implemented by means of a processor executing computer program code. The embodiments of the present application will be described below with reference to the drawings. Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
101. Obtain a first three-dimensional model of a target scene and one or more first track point data of a target object in the target scene.
In the embodiment of the present application, the target scene may be any scene. For example, the target scene is inside a building; as another example, the target scene is an underground parking lot; for another example, the target scene is a campus; as another example, the target scene is a scene within a mall.
In the embodiment of the present application, the three-dimensional model (including the first three-dimensional model and a second three-dimensional model to be mentioned later) may be a computer-aided design (CAD) three-dimensional model, a three-dimensional convex hull, or a three-dimensional point cloud.
In the embodiment of the present application, the target object may be any object. In one possible implementation, the target object includes one of: people, vehicles, robots.
In this embodiment of the application, the number of the first trajectory point data may be 1, and may also be greater than 1. The first trajectory point data includes a location and a timestamp. The position of the first trajectory point data is a position under the coordinate system of the first three-dimensional model, that is, the position of the first trajectory point data is a three-dimensional coordinate.
For example, suppose the one or more first trajectory point data of the target object include first trajectory point data a, and that first trajectory point data a includes point A and timestamp t1. From this first trajectory point data, it may be determined that the target object appeared at point A at time t1.
In one implementation of obtaining a first three-dimensional model of a target scene, the image processing apparatus takes as the first three-dimensional model a three-dimensional model of the target scene input by a user through an input component. The input component includes: a keyboard, a mouse, a touch screen, a touch pad, and an audio input device.
In another implementation of obtaining the first three-dimensional model of the target scene, the image processing apparatus receives the three-dimensional model of the target scene sent by a terminal as the first three-dimensional model. The terminal may be any one of the following: a mobile phone, a computer, a tablet computer, or a server.
In yet another implementation of acquiring the first three-dimensional model of the target scene, the image processing device includes a lidar. The image processing device scans a target scene through the laser radar to obtain a three-dimensional model of the target scene as a first three-dimensional model.
In one implementation of obtaining one or more first trajectory point data of a target object within a target scene, an image processing apparatus receives trajectory point data of the target object within the target scene input by a user through an input component.
In another implementation manner of obtaining one or more first track point data of a target object in a target scene, the image processing device receives the one or more first track point data of the target object in the target scene sent by a terminal.
In yet another implementation of obtaining one or more first trajectory point data of the target object within the target scene, the image processing apparatus obtains one or more trajectory point data to be converted and a coordinate conversion relationship of the target object in a coordinate system of a global positioning system, where the coordinate conversion relationship is a conversion relationship between the coordinate system of the global positioning system and a coordinate system of the first three-dimensional model. The image processing device converts one or more track point data to be converted of the target object in a global positioning system coordinate system into one or more first track point data of the target object in the first three-dimensional model according to the coordinate conversion relation, and one or more first track point data of the target object in a target scene are obtained.
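As an illustrative sketch of this conversion, assume for illustration that the coordinate conversion relationship is given as a 4x4 homogeneous transform matrix (the application does not fix its form):

```python
import numpy as np

def gps_to_model(points_gps: np.ndarray, transform: np.ndarray) -> np.ndarray:
    """Convert N x 3 GPS-frame coordinates into the coordinate system of the first three-dimensional model."""
    homogeneous = np.hstack([points_gps, np.ones((len(points_gps), 1))])  # N x 4 homogeneous coordinates
    converted = homogeneous @ transform.T                                  # apply the conversion relationship
    return converted[:, :3]                                                # back to N x 3 positions
```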
It should be understood that, in the embodiment of the present application, the obtaining of the first three-dimensional model of the target scene and the obtaining of the one or more first trajectory point data of the target object within the target scene may be performed separately or simultaneously.
102. Display the track of the target object in the first three-dimensional model according to the one or more first track point data.
The image processing device sequentially connects the first track points of the target object in the target scene in ascending order of their timestamps to obtain the track of the target object in the first three-dimensional model, and displays the track in the first three-dimensional model.
For example, the first trajectory point data of the target object include: track point data a, track point data b, and track point data c, wherein track point data a includes position A and timestamp t1, track point data b includes position B and timestamp t2, and track point data c includes position C and timestamp t3.
If t1 is earlier than t2 and t2 is earlier than t3, then the trajectory of the target object within the first three-dimensional model shows that the target object appears at position A at t1, at position B at t2, and at position C at t3.
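A minimal sketch of step 102, assuming each first track point data is a (position, timestamp) pair (an illustrative representation, not prescribed by this application); the polyline returned is what is drawn in the first three-dimensional model:

```python
def build_trajectory(track_point_data):
    """track_point_data: iterable of (position, timestamp) pairs."""
    ordered = sorted(track_point_data, key=lambda d: d[1])   # ascending timestamps
    polyline = [position for position, _ in ordered]         # positions in visit order
    return polyline  # consecutive pairs are the segments of the displayed track
```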
In the embodiment of the application, the image processing device displays the track of the target object in the first three-dimensional model according to one or more first track point data of the target object in the target scene, so that the track of the target object in the target scene can be more intuitively displayed.
As an alternative embodiment, the image processing apparatus executes the following steps in executing step 102:
1. Determine one or more first track points of the target object according to the one or more first track point data.
In the embodiment of the application, the track points are points where the target object appears in the target scene, and the track points have timestamps. The image processing apparatus may determine the position of the track point from the position in the first track point data, and determine the timestamp of the track point from the timestamp in the first track point data. For example, the first track point data a includes point A and timestamp t1. At this time, the track point determined according to the first track point data a is point A, and the timestamp of the track point is t1.
The image processing device can obtain a first track point according to first track point data, and can obtain one or more first track points according to one or more first track point data.
2. A first three-dimensional region containing the one or more first trajectory points is determined from the first three-dimensional model.
Since the area of the target scene may be larger, the area where the target object appears may be highlighted in order to better display the trajectory of the target object within the target scene. Thus, prior to displaying the trajectory of the target object within the first three-dimensional model, the region in which the target object appears may be determined from the first three-dimensional model.
In this step, the image processing apparatus takes a region including one or more first locus points of the target object as a region where the target object appears, and determines a region including one or more first locus points of the target object from the first three-dimensional model, to obtain a first three-dimensional region.
3. Display the first three-dimensional region in a predetermined display mode in the first three-dimensional model, and display the track of the target object in the first three-dimensional region.
In the embodiment of the present application, the predetermined display mode is a display mode for highlighting the first three-dimensional region. The first three-dimensional region is highlighted in the first three-dimensional model in a manner that the first three-dimensional region is displayed differently from a manner in which the non-first three-dimensional region is displayed, wherein the non-first three-dimensional region includes regions of the first three-dimensional model other than the first three-dimensional region. Optionally, the predetermined display mode includes one or more of the following: highlighting color, highlighting, and floating display.
In one possible implementation, the predetermined display mode includes color highlighting. The image processing device converts the non-first three-dimensional area into a gray image and reserves the color of the first three-dimensional area so as to realize the highlighting of the first three-dimensional area. The display effect of highlighting the first three-dimensional area in the first three-dimensional model is that the first three-dimensional area is a color image, and the non-first three-dimensional area is a black-and-white image.
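A hedged sketch of this color-highlighting mode, assuming a rendered RGB frame of the first three-dimensional model and a boolean mask marking the pixels of the first three-dimensional region (both assumptions for illustration):

```python
import numpy as np

def highlight_region(frame_rgb: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """frame_rgb: H x W x 3 uint8 image; region_mask: H x W bool, True inside the first three-dimensional region."""
    gray = frame_rgb.mean(axis=2, keepdims=True).astype(frame_rgb.dtype)  # H x W x 1 luminance
    return np.where(region_mask[..., None], frame_rgb, gray)              # color inside, grayscale outside
```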
In another possible implementation, the predetermined display mode includes highlighting. The image processing apparatus highlights the first three-dimensional region to achieve highlighting of the first three-dimensional region.
In yet another possible implementation, the predetermined display mode includes a floating display. The first three-dimensional model includes a Head Up Display (HUD) layer, and the image processing device determines a display area corresponding to the first three-dimensional area from the HUD layer as a floating display area. The image processing apparatus displays the first three-dimensional region in the floating display region.
The image processing apparatus displays the trajectory of the target object within the first three-dimensional region while highlighting the first three-dimensional region, thereby enhancing the display effect of the trajectory of the target object.
In one possible implementation scenario, the image processing apparatus may implement follow-up display of the target object by performing steps 1 to 3.
For example, the target object is Zhang San and the target scene is a mall. By executing steps 1 to 3, the image processing device can highlight Zhang San's track in the mall according to Zhang San's real-time position, achieving the effect of follow-up display of Zhang San's track in the mall.
As an alternative embodiment, the image processing apparatus performs the following steps in the process of performing step 2:
4. The size of the first three-dimensional region and the shape of the first three-dimensional region are obtained.
In an embodiment of the present application, the size of the first three-dimensional region is used to determine the area of the first three-dimensional region.
For example, the shape of the first three-dimensional region is a rectangle, and the size of the first three-dimensional region is 100 pixel units long and 50 pixel units wide, in which case the first three-dimensional region is a region surrounded by a rectangle 100 pixel units long and 50 pixel units wide.
For another example, the first three-dimensional region is a circle, and the size of the first three-dimensional region is 50 pixel units in radius, and in this case, the first three-dimensional region is a region surrounded by a circle having a radius of 50 pixel units.
For another example, the first three-dimensional region has an isosceles trapezoid shape, and the size of the first three-dimensional region is an upper base of 50 pixel units, a lower base of 80 pixel units, and a height of 30 pixel units; in this case, the first three-dimensional region is the region enclosed by an isosceles trapezoid with an upper base of 50 pixel units, a lower base of 80 pixel units, and a height of 30 pixel units.
In one implementation of obtaining the size of the first three-dimensional region, the image processing apparatus takes the size of the first three-dimensional region input by the user through the input component as the size of the first three-dimensional region.
In another implementation manner of acquiring the size of the first three-dimensional region, the image processing apparatus receives the size of the first three-dimensional region transmitted by the terminal as the size of the first three-dimensional region.
In one implementation of obtaining the shape of the first three-dimensional region, the image processing apparatus takes the shape of the first three-dimensional region input by the user through the input component as the shape of the first three-dimensional region.
In another implementation of acquiring the shape of the first three-dimensional region, the image processing apparatus receives the shape of the first three-dimensional region transmitted by the terminal as the shape of the first three-dimensional region.
5. The centroid of one or more first trajectory points of the target object is determined.
Since the distribution of the locus points of the target object may be uneven, determining the centroid of the locus points of the target object may determine the concentration region of the locus points of the target object.
For example, the track point region of the target object shown in fig. 2 is the same as the track point region of the target object shown in fig. 3, and the number of track points in fig. 2 is the same as the number of track points in fig. 3. However, since the distribution of the trace points in fig. 2 is different from that in fig. 3, the positions of the centroids of the trace points are different in fig. 2 and 3.
In the track area of the target object shown in fig. 2, the track points of the target object are distributed unevenly, but the centroid is located in the concentrated area of the track points, that is, the centroid of the track points of the target object can determine the concentrated area of the track points of the target object.
In the embodiment of the application, the image processing device determines the centroid of the track point of the target object, and determines the coordinate of the centroid of the track point of the target object under the coordinate system of the first three-dimensional model.
In one possible implementation manner, the image processing apparatus determines a polygon enclosing the trajectory points of the target object according to the trajectory points of the target object, wherein vertices of the polygon are the trajectory points of the target object. The image processing device determines the center of mass of the polygon as the center of mass of the trajectory point of the target object.
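A sketch of this centroid computation using the standard shoelace formula, assuming the track points are treated as vertices of a planar polygon so that only two coordinates are used (an assumption for illustration):

```python
def polygon_centroid(vertices):
    """vertices: list of (x, y) polygon vertices in order; returns the polygon's centroid."""
    area2 = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # shoelace cross term for edge i -> i+1
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * area2), cy / (3.0 * area2)
```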
6. The centroid is set as the center of the first three-dimensional region, and the first three-dimensional region is determined from the first three-dimensional model based on the center, the size of the first three-dimensional region, and the shape of the first three-dimensional region.
In an embodiment of the present application, the center of the first three-dimensional region may be a geometric center of the first three-dimensional region. Since the area of the first three-dimensional region is determined by the size of the first three-dimensional region, in the case where the size of the first three-dimensional region is limited, the area of the first three-dimensional region is also limited. In the case that the area of the first three-dimensional region is limited, the image processing device uses the center of mass as the center of the first three-dimensional region, so that the first three-dimensional region can contain more track points.
For example, in the track point region of the target object shown in fig. 4, the centroid of the track point is located in the concentrated region of the track point of the target object, and the image processing apparatus uses the centroid as the center of the first three-dimensional region, so that the first three-dimensional region is closer to the concentrated region of the track point of the target object, and the first three-dimensional region contains more track points.
Specifically, assume the first three-dimensional region is rectangular in shape. In fig. 4, if the centroid is used as the center of the first three-dimensional region, a centroid-centered region can be determined according to the area of the first three-dimensional region; this region includes 7 track points (i.e., track point A, track point B, track point C, track point D, track point E, track point F, and track point G). If the center of the track point region of the target object is instead taken as the center of the first three-dimensional region, a center-based region can likewise be determined; this region includes only 1 track point (i.e., track point F).
That is, for a fixed area of the first three-dimensional region, taking the centroid as the center yields a region containing 7 track points, while taking the center of the track point region of the target object as the center yields a region containing only 1 track point. Obviously, taking the centroid as the center allows the first three-dimensional region to contain more track points.
After determining the center of the first three-dimensional region, the image processing apparatus may determine the first three-dimensional region from the first three-dimensional model in accordance with the center, the size of the first three-dimensional region, and the shape of the first three-dimensional region.
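For the rectangular case, a minimal sketch of placing the region around the centroid (other shapes are handled analogously and omitted here):

```python
def rectangular_region(center, length, width):
    """Return the (min, max) corners of an axis-aligned rectangle centered on `center`."""
    cx, cy = center
    return ((cx - length / 2.0, cy - width / 2.0),
            (cx + length / 2.0, cy + width / 2.0))
```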
The image processing device thus obtains a first three-dimensional region that can contain more track points of the target object. In this way, when the image processing apparatus displays the trajectory of the target object while displaying the first three-dimensional region, the display effect can be improved.
As an optional implementation manner, the one or more first track points include a second track point, a third track point, a fourth track point, and one or more fifth track points.
In this embodiment, the timestamp of the second track point, the timestamp of the third track point, the timestamp of the fourth track point, and the timestamp of one or more fifth track points decrease in sequence.
For example, the one or more first trace points include: track point a, track point b, track point c, and track point d, wherein the timestamp of track point a is earlier than the timestamp of track point b, the timestamp of track point b is earlier than the timestamp of track point c, and the timestamp of track point c is earlier than the timestamp of track point d. At this time, track point d is the second track point, track point c is the third track point, track point b is the fourth track point, and track point a is a fifth track point, i.e., the number of fifth track points is 1.
For another example, the one or more first trace points include: track point a, track point b, track point c, track point d, and track point e, wherein the timestamp of track point a is earlier than the timestamp of track point b, the timestamp of track point b is earlier than the timestamp of track point c, the timestamp of track point c is earlier than the timestamp of track point d, and the timestamp of track point d is earlier than the timestamp of track point e. At this time, track point e is the second track point, track point d is the third track point, track point c is the fourth track point, track point a and track point b are fifth track points, and the number of fifth track points is 2.
In this embodiment, the image processing apparatus performs the following steps in performing step 5:
7. In a case that the third track point is located within a first to-be-confirmed region, the centroid of the first to-be-confirmed region is determined as the centroid of the one or more first track points, where the first to-be-confirmed region is a polygonal region whose vertices are the second track point, the fourth track point, and the one or more fifth track points.
Since the one or more first trajectory points are discontinuous in the time dimension, there is an error between the trajectory obtained from the one or more first trajectory points and the actual trajectory of the target object. For example, as shown in fig. 5, assume that the real trajectory of the target object moving from trajectory point A to trajectory point B is the curve AB; however, when the image processing apparatus only has the position of trajectory point A and the position of trajectory point B, the trajectory obtained from them is the line segment AB.
Therefore, in order to make the trajectory point region determined from the one or more first trajectory points contain more of the actual trajectory of the target object, and thereby improve the display accuracy of the trajectory, the convex hull containing the one or more first trajectory points may be used as the trajectory point region.
For example, in fig. 6, trace point A, trace point B, trace point C, trace point D, and trace point E are the one or more first trace points, wherein the timestamp of trace point A is earlier than the timestamp of trace point B, the timestamp of trace point B is earlier than the timestamp of trace point C, the timestamp of trace point C is earlier than the timestamp of trace point D, and the timestamp of trace point D is earlier than the timestamp of trace point E. The curve CD is the real trajectory of the target object moving from trace point C to trace point D.
If the track points are connected in sequence from small to large according to the timestamps of the track points, the obtained track point area is a polygon ABCDE. At this time, the curve CD is located outside the track point region, and if the centroid of the track point region is taken as the centroid of the one or more first track points, the accuracy of the centroid of the one or more first track points is obviously reduced.
If a convex hull containing one or more first trajectory points is determined based on one or more first trajectory points, a trajectory point area ABCE as shown in fig. 7 can be obtained. At this time, the track point area contains the curve CD, and if the centroid of the track point area is taken as the centroid of the one or more first track points, the accuracy of the centroid of the one or more first track points can be improved.
In this embodiment of the application, the third track point being located within the first to-be-confirmed region indicates that the first to-be-confirmed region is a convex hull region containing the one or more first track points. At this time, the image processing apparatus uses the first to-be-confirmed region as the trajectory point region of the target object in the first three-dimensional model, which can improve the accuracy of that region and further improve the display accuracy of the trajectory of the target object. The image processing device further uses the centroid of the first to-be-confirmed region as the centroid of the one or more first track points, which can improve the accuracy of the centroid of the one or more first track points.
For example, in fig. 7, trace point a and trace point B are both fifth trace points, trace point C is a fourth trace point, trace point D is a third trace point, and trace point E is a second trace point. The first to-be-confirmed area is an area surrounded by the polygon ABCE. Since the trajectory point D is located within the polygon ABCE, the image processing apparatus takes the centroid of the polygon ABCE as the centroid of the one or more first trajectory points.
For another example, in fig. 8, trace point a is a fifth trace point, trace point B is a fourth trace point, trace point C is a third trace point, and trace point D is a second trace point. The first area to be confirmed is an area surrounded by the triangle ABD. Since the trajectory point C is located within the triangle ABD, the image processing device takes the centroid of the triangle ABD as the centroid of one or more first trajectory points.
8. In a case that the third track point is located outside the first to-be-confirmed region, the centroid of a second to-be-confirmed region is determined as the centroid of the one or more first track points, where the first to-be-confirmed region and the second to-be-confirmed region are both polygonal regions, the vertices of the first to-be-confirmed region are the second track point, the fourth track point, and the one or more fifth track points, and the vertices of the second to-be-confirmed region are the second track point, the third track point, the fourth track point, and the one or more fifth track points.
The third track point being located outside the first to-be-confirmed region indicates that a track point falls outside that region; in this case, taking the first to-be-confirmed region as the trajectory point region of the target object in the first three-dimensional model would introduce a large error. Therefore, to reduce the error, the image processing device takes the second to-be-confirmed region as the trajectory point region of the target object in the first three-dimensional model, and further takes the centroid of the second to-be-confirmed region as the centroid of the one or more first track points, which can improve the accuracy of the centroid of the one or more first track points.
It should be understood that the second track point, the third track point, the fourth track point, and the one or more fifth track points are named only for description: the second track point refers to the track point with the largest timestamp among the one or more first track points, the third track point refers to the track point with the second largest timestamp, the fourth track point refers to the track point with the third largest timestamp, and the one or more fifth track points refer to the track points whose timestamps are smaller than that of the fourth track point.
In practical application, if the track of the target object is updated in real time, then once the connection mode among the track points from the one with the smallest timestamp up to the one with the third largest timestamp has been determined, the track point with the largest timestamp can, according to the above implementation, be connected either to the track point with the second largest timestamp or to the track point with the third largest timestamp, thereby determining the trajectory point region of the target object.
For example, at time t1, the one or more first track points include track point A, track point B, track point C, and track point D shown in fig. 8; that is, at time t1, track point A is a fifth track point, track point B is the fourth track point, track point C is the third track point, and track point D is the second track point. At time t1, the trajectory point region is the area enclosed by the triangle ABD in fig. 8.
At a later time t2, a track point E as shown in fig. 7 is added to the one or more first track points. At this moment, track point A and track point B are both fifth track points, track point C is the fourth track point, track point D is the third track point, and track point E is the second track point. The image processing device first determines whether track point E should be connected to track point D or to track point C; upon determining that track point E is connected to track point C, the image processing device removes the connecting line between track point C and track point D, and then determines that the trajectory point region is the area enclosed by the polygon ABCE.
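The choice made in steps 7 and 8 can be sketched as follows; a ray-casting point-in-polygon test is used here for illustration, as the application does not prescribe a specific test:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test; `polygon` is a list of (x, y) vertices in order."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # toggle on each polygon edge crossed by a ray cast to the right of the point
        if (y0 > y) != (y1 > y) and x < (x1 - x0) * (y - y0) / (y1 - y0) + x0:
            inside = not inside
    return inside

def region_vertices(second, third, fourth, fifths):
    """Vertices of the trajectory point region per steps 7 and 8 (fifths in ascending timestamp order)."""
    first_region = [*fifths, fourth, second]       # first to-be-confirmed region (third point excluded)
    if point_in_polygon(third, first_region):
        return first_region                        # step 7: third track point already inside
    return [*fifths, fourth, third, second]        # step 8: second to-be-confirmed region includes it
```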
As an alternative embodiment, the image processing apparatus performs the following steps in the process of performing step 1:
9. Determine the n track point data with the largest timestamps among the one or more first track point data as the one or more effective track point data.
Since the number of the first trajectory point data may be large, processing all of them would impose a heavy data processing load on the image processing apparatus. Considering the timeliness of the trajectory point data of the target object, the image processing apparatus selects the n trajectory point data with the largest timestamps from the one or more first trajectory point data as the one or more effective trajectory point data. This ensures the validity of the trajectory point data of the target object (i.e., the first trajectory point data) while reducing their number, thereby reducing the data processing load incurred in processing them.
In the embodiment of the application, n is a positive integer, and the specific value of n can be set according to actual requirements. For example, if a user wants to reduce the data processing amount and increase the processing speed, the value of n may be set small, for example, to 80; if the user wants to show more track point data of the target object, the value of n may be increased, for example, to 200.
The image processing device determines the n track point data with the largest timestamps among the one or more first track point data, i.e., the n most recent first track point data, and takes these n first track point data as effective track point data, obtaining the one or more effective track point data.
For example, assuming that n is 3, the one or more first track point data include track point data 1, track point data 2, track point data 3, track point data 4, and track point data 5, where the timestamp of track point data 1 is t1, the timestamp of track point data 2 is t2, the timestamp of track point data 3 is t3, the timestamp of track point data 4 is t4, and the timestamp of track point data 5 is t5. If t1 is earlier than t2, t2 is earlier than t3, t3 is earlier than t4, and t4 is earlier than t5, the effective track point data include track point data 3, track point data 4, and track point data 5.
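A minimal sketch of this selection, again assuming each track point data is a (position, timestamp) pair:

```python
def select_valid(track_point_data, n):
    """Keep the n track point data with the largest timestamps (the most recent ones)."""
    newest_first = sorted(track_point_data, key=lambda d: d[1], reverse=True)
    return newest_first[:n]
```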
10. Determine the one or more first track points of the target object according to the one or more effective track point data.
The image processing means determines a valid trajectory point on the basis of the position and the time stamp in the valid trajectory point data. The image processing device may determine one or more valid trajectory points from the location and the timestamp in the one or more valid trajectory point data. After obtaining the one or more effective track points, the image processing device takes the one or more effective track points as one or more first track points.
For example, the one or more valid trace point data include trace point data 1, trace point data 2, and trace point data 3, where trace point data 1 includes position 1 and timestamp t1, trace point data 2 includes position 2 and timestamp t2, and trace point data 3 includes position 3 and timestamp t3. The image processing apparatus can obtain effective track point 1 from track point data 1, where effective track point 1 is position 1, at which the target object appeared at t1. The image processing apparatus can obtain effective track point 2 from track point data 2, where effective track point 2 is position 2, at which the target object appeared at t2. The image processing apparatus can obtain effective track point 3 from track point data 3, where effective track point 3 is position 3, at which the target object appeared at t3.
By executing step 9 and step 10, the image processing apparatus can reduce the amount of data processing and increase the processing speed.
As an optional implementation manner, the trajectory of the target object includes a first trajectory and a second trajectory, the first trajectory is the trajectory between the trajectory point with the m-th largest timestamp and the trajectory point with the largest timestamp, the second trajectory is the trajectory of the target object other than the first trajectory, and the display manner of the first trajectory is different from the display manner of the second trajectory.
In this embodiment of the application, the first track is a track between the track point with the mth timestamp and the track point with the largest timestamp, that is, the first track includes the m track points with the largest timestamps in the track of the target object.
For example, suppose m is 2 and the track of the target object includes track point 1, track point 2, track point 3, track point 4, and track point 5, where the timestamp of track point 1 is t1, the timestamp of track point 2 is t2, the timestamp of track point 3 is t3, the timestamp of track point 4 is t4, and the timestamp of track point 5 is t5. If t1 is earlier than t2, t2 is earlier than t3, t3 is earlier than t4, and t4 is earlier than t5, the first track includes track point 4 and track point 5.
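A sketch of this split, assuming the track is held as a list of (position, timestamp) pairs:

```python
def split_tracks(track_points, m):
    """Return (first_track, second_track): the m newest points versus all earlier ones."""
    ordered = sorted(track_points, key=lambda p: p[1])  # ascending timestamps
    return ordered[-m:], ordered[:-m]
```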
Because a trajectory line by itself carries no direction information, the image processing apparatus displays the first track and the second track in different display manners so that the user can distinguish them and thereby determine the moving direction of the target object.
Optionally, the display manner includes one of the following: color, whether the trajectory line is solid or dashed, and whether the trajectory line carries an arrow.
For example, in the case where the display manner includes color, the image processing apparatus displays the first trajectory in blue and the second trajectory in red. In this way, the user can distinguish the first trajectory from the second trajectory by color, and can further determine that the target object moves from the second trajectory toward the first trajectory.
For another example, when the display manner includes whether the trajectory line is solid or dashed, the image processing apparatus displays the first trajectory as a solid line and the second trajectory as a dashed line. In this way, the user can distinguish the first trajectory from the second trajectory by the line style and can further determine the moving direction of the target object.
For another example, when the display manner includes whether the trajectory line carries an arrow, the image processing apparatus displays the first trajectory as a line carrying an arrow that points at the track point with the largest timestamp, and displays the second trajectory as a line without an arrow. In this way, the user can distinguish the first trajectory from the second trajectory by the presence of the arrow and can further determine the moving direction of the target object.
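These display manners can be combined in a rendering sketch such as the following, which uses matplotlib purely for illustration; the embodiment does not name a drawing library:

```python
# A hedged sketch of the display-manner distinction, using matplotlib as an
# illustrative rendering backend (an assumption, not part of this embodiment).
import matplotlib.pyplot as plt

def draw_trajectory(points, m):
    """points: chronologically ordered (x, y) track points;
    m: number of most recent track points forming the first trajectory (m >= 2)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    split = len(points) - m
    # Second trajectory: the older portion, drawn as a red dashed line.
    plt.plot(xs[:split + 1], ys[:split + 1], "r--", label="second trajectory")
    # First trajectory: the m most recent track points, drawn as a solid blue
    # line with an arrow pointing at the track point with the largest timestamp.
    plt.plot(xs[split:], ys[split:], "b-", label="first trajectory")
    plt.annotate("", xy=points[-1], xytext=points[-2],
                 arrowprops={"arrowstyle": "->", "color": "blue"})
    plt.legend()
    plt.show()
```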
As an optional implementation manner, the first three-dimensional model includes a first floor and a second floor, and the height of the first floor is different from the height of the second floor. The trajectory of the target object is within a first floor.
In the embodiment of the application, the first floor and the second floor are any two levels in the first three-dimensional model whose heights differ; that is, the first floor and the second floor are two different floors.
In the case where the trajectory of the target object is within the first floor, displaying the trajectory of the target object within the first three-dimensional model means displaying the trajectory within the first floor; that is, the image processing apparatus displays the first floor within the first three-dimensional model while displaying the trajectory of the target object. For example, the image processing apparatus highlights a first three-dimensional region within the first floor and displays the trajectory of the target object within the first three-dimensional region.
In one possible implementation, the image processing apparatus displays the first floor in the first three-dimensional model by hiding floors other than the first floor in the first three-dimensional model.
For example, the first three-dimensional model has five levels, and the first floor is the fourth level in the first three-dimensional model. The image processing apparatus may hide the fifth level, thereby displaying the first floor while preserving the appearance of the first three-dimensional model. Because the first floor is the fourth level, the first, second, and third levels are occluded when the first floor is displayed; in this way, interference from those levels with the display of the first floor is avoided when the first three-dimensional model is displayed, improving the display effect of the first floor. The display effect can be seen in fig. 9.
In this embodiment, the image processing apparatus further performs the steps of:
11. and displaying the second floor in the first three-dimensional model when the target object is determined to appear on the second floor.
In order to keep tracking and displaying the trajectory of the target object, the image processing apparatus should display the second floor within the first three-dimensional model in the case where the target object appears on the second floor. In one possible implementation, the image processing apparatus switches the display content within the first three-dimensional model from the first floor to the second floor when it determines that the target object appears on the second floor.
For example, as shown in fig. 9, the image processing apparatus displays the first floor in the first three-dimensional model; after the image processing apparatus switches the display content of the first three-dimensional model from the first floor to the second floor, the content shown in fig. 10 may be displayed.
In this possible implementation manner, the image processing apparatus may improve the display effect of the trajectory of the target object by switching the display content within the first three-dimensional model from the first floor to the second floor.
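A minimal sketch of this floor-switching behavior follows; the Floor and Model3D names and the per-floor visibility flag are illustrative assumptions:

```python
# A minimal sketch of per-floor display and step 11, assuming the model exposes a
# visibility flag per floor; Floor and Model3D are hypothetical names.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Floor:
    level: int
    visible: bool = True

class Model3D:
    def __init__(self, floors: List[Floor]):
        self.floors = floors
        self.displayed_floor: Optional[Floor] = None

    def display_floor(self, target: Floor) -> None:
        """Show the target floor by hiding the floors above it; the floors
        below stay visible but are occluded by the target floor."""
        for floor in self.floors:
            floor.visible = floor.level <= target.level
        self.displayed_floor = target

    def on_target_object_appears(self, floor: Floor) -> None:
        # Step 11: switch the displayed content when the target object is
        # determined to appear on another floor.
        if floor is not self.displayed_floor:
            self.display_floor(floor)
```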
In one implementation of determining that the target object appears on the second floor, the image processing apparatus makes this determination upon detecting an instruction indicating that the target object appears on the second floor.
For example, the user may input such an instruction to the image processing apparatus through the input component; when the instruction is detected, the image processing apparatus determines that the target object appears on the second floor.
In another implementation, the image processing apparatus determines that the target object appears on the second floor when it determines that an image acquired by a camera on the second floor contains the target object.
In yet another implementation, the image processing apparatus obtains a predicted trajectory of the target object according to the trajectory of the target object and the moving direction of the target object. If the predicted trajectory indicates that the target object will appear on the second floor, the image processing apparatus determines that the target object appears on the second floor.
For example, if the image processing apparatus determines from the predicted trajectory that the target object will appear on the second floor at time t2, it determines that the target object appears on the second floor at t2 and displays the second floor within the first three-dimensional model at t2.
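One possible realization of the prediction is sketched below; the linear extrapolation from the two most recent track points and the uniform floor height are assumptions, since this embodiment does not prescribe a prediction model:

```python
# A hedged sketch of floor prediction, assuming linear extrapolation from the two
# most recent track points and a uniform floor height; both are illustrative
# choices, not details fixed by this embodiment.
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def predict_next_position(points: List[Point3D]) -> Point3D:
    """Extrapolate one step along the current moving direction of the target object."""
    (x1, y1, z1), (x2, y2, z2) = points[-2], points[-1]
    return (2 * x2 - x1, 2 * y2 - y1, 2 * z2 - z1)

def predicted_floor(points: List[Point3D], floor_height: float) -> int:
    """Map the predicted height to a floor index (0-based), assuming equal floor heights."""
    return int(predict_next_position(points)[2] // floor_height)
```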
As an alternative embodiment, before determining that the target object is present on the second floor, the image processing apparatus further performs the following steps:
12. Acquiring a first image containing the target object.
In the embodiment of the present application, the target object may be any object. In one possible implementation, the target object includes one of the following: a human body, a human face, or a vehicle.
In one implementation of acquiring the first image, the image processing apparatus receives the first image containing the target object input by a user through an input component.
In another implementation, the image processing apparatus receives the first image containing the target object sent by a terminal.
In yet another implementation, the image processing apparatus is loaded with a camera and obtains the first image by photographing the target object with the camera.
In yet another implementation, there is a communication connection between the image processing apparatus and a camera; the camera obtains the first image by photographing the target object, and the image processing apparatus acquires the first image from the camera through the communication connection.
After performing step 12, the image processing apparatus determines that the target object appears on the second floor by performing the following step:
13. Determining that the second image contains the target object by comparing the first image with the second image, where the second image is an image acquired by a camera on the second floor.
In the embodiment of the present application, comparing the first image with the second image refers to comparing the similarity between the first image and the second image to determine whether the second image includes the target object.
In a possible implementation manner, in the case where the target object is a human face, the image processing apparatus determines whether the second image contains the target object through face comparison between the first image and the second image; in another possible implementation manner, in the case where the target object is a human body, through human-body comparison between the first image and the second image; in yet another possible implementation manner, in the case where the target object is a vehicle, through vehicle comparison between the first image and the second image.
Since the second image is an image acquired by a camera on the second floor, when it is determined that the second image contains the target object, it is determined that the target object appears on the second floor.
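A minimal sketch of such a comparison follows; the embedding-based similarity, the extract_features function, and the 0.8 threshold are assumptions, since the embodiment does not fix a comparison algorithm:

```python
# A minimal sketch of step 13, assuming a feature-extraction network mapping an
# image to an embedding vector; extract_features and the threshold value are
# hypothetical.
import numpy as np

def second_image_contains_target(first_image, second_image, extract_features,
                                 similarity_threshold: float = 0.8) -> bool:
    """first_image contains the target object; second_image was acquired by a
    camera on the second floor. Compare the two by embedding similarity."""
    f1 = extract_features(first_image)   # face, human-body, or vehicle features
    f2 = extract_features(second_image)
    # Cosine similarity between the two feature vectors.
    sim = float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
    return sim >= similarity_threshold
```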
Based on the technical scheme provided by the embodiment of the application, the embodiment of the application also provides a possible application scene.
In order to improve social-security control capability and maintain a good social-security environment, monitoring cameras are arranged in more and more places. When related personnel need to find a target person, the trajectory of the target person can be determined from the images collected by the monitoring cameras. To make it easier for the user to view the trajectory of the target person, the trajectory may be displayed in the scene. Based on the technical solution provided by the embodiments of the present application, the trajectory of the target person can be displayed in a three-dimensional model of a building.
For example, when related personnel want to track the trajectory of Zhang San in building A, they input the face image of Zhang San to the server (i.e., the image processing apparatus). The server then compares the face image of Zhang San with the images acquired in real time by the monitoring cameras in building A to determine whether Zhang San appears in building A, and determines the track point data of Zhang San in building A according to the face comparison results. For example, if the server determines through face comparison that image a contains Zhang San, and image a was acquired by monitoring camera b at time t1, the server can obtain one piece of track point data from image a: Zhang San appeared at the position covered by monitoring camera b at time t1.
The image processing apparatus processes the track point data of Zhang San in building A and the three-dimensional model of building A according to the technical solution provided by the embodiments of the present application, so as to display the trajectory of Zhang San in the three-dimensional model of building A.
Optionally, in order to track the trajectory of Zhang San in real time, the image processing apparatus may, based on the technical solution provided in the embodiments of the present application, highlight the first three-dimensional region containing the track points of Zhang San in building A and display the trajectory of Zhang San within the first three-dimensional region.
Alternatively, when the image processing apparatus determines that Zhang San moves from floor c to floor d of building A, the floor displayed in the three-dimensional model of building A is switched from floor c to floor d.
As an alternative embodiment, in the case that the number of the first trajectory point data is greater than 1, the image processing apparatus acquires one or more first trajectory point data of the target object within the target scene by performing the following steps:
14. Acquiring two or more second track point data of the target object within the target scene, and a distance threshold.
In the present embodiment, the number of the second trajectory point data may be 2 or more than 2. The second trajectory point data includes a location and a timestamp. The distance threshold is a positive number, and the specific value of the distance threshold can be set according to actual requirements.
15. Determining the distance of an adjacent track point data set, where the adjacent track point data set comprises two second track point data with adjacent timestamps.
In this embodiment, an adjacent track point data set includes two second track point data whose timestamps are adjacent, and the distance of an adjacent track point data set refers to the distance between the positions of the two track point data in the set.
For example, the two or more second track point data include track point data a, track point data b, and track point data c, where the timestamp of track point data a is t1, the timestamp of track point data b is t2, and the timestamp of track point data c is t3. If t1 is greater than t2 and t2 is greater than t3, an adjacent track point data set A and an adjacent track point data set B may be determined from the two or more second track point data, where set A includes track point data a and track point data b, and set B includes track point data b and track point data c.
If the position of track point data a is p1, the position of track point data b is p2, the position of track point data c is p3, the distance between p1 and p2 is d1, and the distance between p2 and p3 is d2, then the distance of adjacent track point data set A is d1 and the distance of adjacent track point data set B is d2.
To reduce the data processing amount of the image processing apparatus, second track point data that are close together may be merged before the track point data of the target object within the target scene are processed. Optionally, the image processing apparatus may merge two close second track point data by deleting the one with the smaller timestamp.
In the present embodiment, the image processing apparatus determines whether the distance of an adjacent track point data set is small based on the distance threshold, and obtains the one or more first track point data based on the determination result and the two or more second track point data. Specifically, the image processing apparatus obtains the one or more first track point data by executing one of step 16 and step 17 (see the sketch after step 17):
16. When the distance is smaller than the distance threshold, removing the old track point data from the two or more second track point data to obtain the one or more first track point data, where the old track point data is the second track point data with the smaller timestamp in the adjacent track point data set.
17. When the distance is greater than or equal to the distance threshold, taking the two or more second track point data as the one or more first track point data.
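The following sketch walks over timestamp-adjacent pairs and applies steps 16 and 17; the record type repeats the illustrative TrackPointData from the earlier sketch, and this is only one possible implementation:

```python
# A hedged sketch of steps 14 to 17: when two timestamp-adjacent track point data
# are closer than the threshold, the datum with the smaller timestamp is dropped
# (step 16); otherwise both are kept (step 17).
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackPointData:  # same illustrative record as in the earlier sketch
    position: Tuple[float, float]
    timestamp: float

def merge_close_track_points(data: List[TrackPointData],
                             distance_threshold: float) -> List[TrackPointData]:
    data = sorted(data, key=lambda p: p.timestamp)  # order by timestamp
    merged: List[TrackPointData] = []
    for point in data:
        if merged and math.dist(merged[-1].position, point.position) < distance_threshold:
            # Step 16: the adjacent set is too close; remove the older datum.
            merged.pop()
        # Step 17 (distance >= threshold) keeps both data; in either case the
        # newer datum is retained.
        merged.append(point)
    return merged
```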
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation process; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, where the image processing apparatus 1 includes: an acquisition unit 11, a processing unit 12, wherein:
an obtaining unit 11, configured to obtain a first three-dimensional model of a target scene and one or more first trajectory point data of a target object within the target scene;
a processing unit 12 for displaying the trajectory of the target object within the first three-dimensional model in dependence on the one or more first trajectory point data.
In combination with any embodiment of the present application, the processing unit 12 is configured to:
determining one or more first track points of the target object according to the one or more first track point data;
determining a first three-dimensional region from the first three-dimensional model that includes the one or more first trajectory points;
displaying the first three-dimensional region in a preset display manner within the first three-dimensional model, and displaying the trajectory of the target object within the first three-dimensional region.
In combination with any embodiment of the present application, the processing unit 12 is configured to:
acquiring the size of the first three-dimensional region and the shape of the first three-dimensional region;
determining a centroid of the one or more first trace points;
taking the centroid as the center of the first three-dimensional region, and determining the first three-dimensional region from the first three-dimensional model according to the size of the first three-dimensional region and the shape of the first three-dimensional region.
In combination with any embodiment of the present application, the one or more first track points include a second track point, a third track point, a fourth track point, and one or more fifth track points, and a timestamp of the second track point, a timestamp of the third track point, a timestamp of the fourth track point, and a timestamp of the one or more fifth track points decrease in sequence;
the processing unit 12 is configured to:
under the condition that the third track point is located in a first to-be-confirmed area, determining a centroid of the first to-be-confirmed area as a centroid of the one or more first track points, wherein the first to-be-confirmed area is a polygonal area, and vertexes of the first to-be-confirmed area are the second track point, the fourth track point and the one or more fifth track points;
in a case where the third track point is located outside the first to-be-confirmed region, determining the centroid of a second to-be-confirmed region as the centroid of the one or more first track points, where both the first to-be-confirmed region and the second to-be-confirmed region are polygonal regions, the vertices of the first to-be-confirmed region are the second track point, the fourth track point, and the one or more fifth track points, and the vertices of the second to-be-confirmed region are the second track point, the third track point, the fourth track point, and the one or more fifth track points.
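For illustration, this centroid selection can be sketched in two dimensions as follows; the use of shapely, and the assumption that the listed vertices form a simple polygon, are illustrative choices rather than part of the embodiment:

```python
# A minimal 2-D sketch of the centroid selection, using shapely for the polygon
# containment test and centroid; library choice and vertex ordering are assumed.
from shapely.geometry import Point, Polygon

def centroid_of_first_track_points(second_pt, third_pt, fourth_pt, fifth_pts):
    """Track points as (x, y) pairs; timestamps decrease in the order
    second_pt, third_pt, fourth_pt, fifth_pts."""
    first_region = Polygon([second_pt, fourth_pt, *fifth_pts])
    if first_region.contains(Point(third_pt)):
        # The third track point lies inside the first to-be-confirmed region,
        # so the centroid of that region already represents all first track points.
        centroid = first_region.centroid
    else:
        # Otherwise include the third track point as an extra vertex and use
        # the centroid of the second to-be-confirmed region.
        centroid = Polygon([second_pt, third_pt, fourth_pt, *fifth_pts]).centroid
    return (centroid.x, centroid.y)
```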
In combination with any embodiment of the present application, the processing unit 12 is configured to:
determining the n track point data with the largest timestamps among the one or more first track point data as one or more effective track point data;
and determining one or more first track points of the target object according to the one or more effective track point data.
With reference to any embodiment of the present application, in a case that the number of the first trajectory point data is greater than 1, the obtaining unit 11 is configured to:
acquiring two or more second track point data and a distance threshold of the target object in the target scene;
determining the distance between adjacent track point data sets, wherein the adjacent track point data sets comprise two second track point data with adjacent timestamps;
under the condition that the distance is smaller than the distance threshold value, removing old track point data from the two or more second track point data to obtain the one or more first track point data, wherein the old track point data is the second track point data with the smallest timestamp in the adjacent track point data set;
and in a case where the distance is greater than or equal to the distance threshold, taking the two or more second track point data as the one or more first track point data.
In this embodiment, the image processing apparatus displays the trajectory of the target object in the first three-dimensional model according to one or more first trajectory point data of the target object in the target scene, so that the trajectory of the target object in the target scene can be more intuitively displayed.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present application may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Fig. 12 is a schematic diagram of a hardware structure of an image processing apparatus according to an embodiment of the present application. The image processing apparatus 2 includes a processor 21, a memory 22, an input device 23, and an output device 24. The processor 21, the memory 22, the input device 23 and the output device 24 are coupled by a connector, which includes various interfaces, transmission lines or buses, etc., and the embodiment of the present application is not limited thereto. It should be appreciated that in various embodiments of the present application, coupled refers to being interconnected in a particular manner, including being directly connected or indirectly connected through other devices, such as through various interfaces, transmission lines, buses, and the like.
The processor 21 may be one or more graphics processing units (GPUs); in the case where the processor 21 is one GPU, the GPU may be a single-core GPU or a multi-core GPU. Alternatively, the processor 21 may be a processor group composed of a plurality of GPUs coupled to each other through one or more buses. Alternatively, the processor may be another type of processor; the embodiments of the present application are not limited in this respect.
Memory 22 may be used to store computer program instructions and various types of computer program code for executing aspects of the present application. Optionally, the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used for the associated instructions and data.
The input means 23 are for inputting data and/or signals and the output means 24 are for outputting data and/or signals. The input device 23 and the output device 24 may be separate devices or may be an integral device.
It is understood that, in the embodiment of the present application, the memory 22 may be used to store not only the related instructions, but also the related data, for example, the memory 22 may be used to store the first three-dimensional model and the one or more first trajectory point data acquired through the input device 23, and the embodiment of the present application is not limited to the data specifically stored in the memory.
It will be appreciated that fig. 12 only shows a simplified design of an image processing apparatus. In practical applications, the image processing apparatuses may further include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all image processing apparatuses that can implement the embodiments of the present application are within the scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It is also clear to those skilled in the art that the descriptions of the various embodiments of the present application have different emphasis, and for convenience and brevity of description, the same or similar parts may not be repeated in different embodiments, so that the parts that are not described or not described in detail in a certain embodiment may refer to the descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring a first three-dimensional model of a target scene and one or more than one first track point data of a target object in the target scene;
displaying a trajectory of the target object within the first three-dimensional model in accordance with the one or more first trajectory point data.
2. The method of claim 1, wherein said displaying a trajectory of said target object within said first three-dimensional model from said one or more first trajectory point data comprises:
determining one or more first track points of the target object according to the one or more first track point data;
determining a first three-dimensional region from the first three-dimensional model that includes the one or more first trajectory points;
displaying the first three-dimensional region in a preset display manner within the first three-dimensional model, and displaying the trajectory of the target object within the first three-dimensional region.
3. The method of claim 2, wherein determining a first three-dimensional region containing the one or more first trajectory points from the first three-dimensional model comprises:
acquiring the size of the first three-dimensional region and the shape of the first three-dimensional region;
determining a centroid of the one or more first trace points;
taking the centroid as the center of the first three-dimensional region, and determining the first three-dimensional region from the first three-dimensional model according to the size of the first three-dimensional region and the shape of the first three-dimensional region.
4. The method of claim 3, wherein the one or more first trace points comprise a second trace point, a third trace point, a fourth trace point, one or more fifth trace points, and wherein the timestamps of the second trace point, the third trace point, the fourth trace point, and the one or more fifth trace points decrease in order;
the determining the centroid of the one or more first trace points comprises:
under the condition that the third track point is located in a first to-be-confirmed area, determining a centroid of the first to-be-confirmed area as a centroid of the one or more first track points, wherein the first to-be-confirmed area is a polygonal area, and vertexes of the first to-be-confirmed area are the second track point, the fourth track point and the one or more fifth track points;
in a case where the third track point is located outside the first to-be-confirmed region, determining the centroid of a second to-be-confirmed region as the centroid of the one or more first track points, where both the first to-be-confirmed region and the second to-be-confirmed region are polygonal regions, the vertices of the first to-be-confirmed region are the second track point, the fourth track point, and the one or more fifth track points, and the vertices of the second to-be-confirmed region are the second track point, the third track point, the fourth track point, and the one or more fifth track points.
5. The method of any one of claims 2 to 4, wherein said determining one or more first trajectory points of the target object from the one or more first trajectory point data comprises:
determining the n track point data with the largest timestamps among the one or more first track point data as one or more effective track point data;
and determining one or more first track points of the target object according to the one or more effective track point data.
6. The method according to any one of claims 1 to 5, wherein in a case that the number of the first trajectory point data is greater than 1, the obtaining one or more first trajectory point data of a target object within the target scene comprises:
acquiring two or more second track point data and a distance threshold of the target object in the target scene;
determining the distance between adjacent track point data sets, wherein the adjacent track point data sets comprise two second track point data with adjacent timestamps;
under the condition that the distance is smaller than the distance threshold value, removing old track point data from the two or more second track point data to obtain the one or more first track point data, wherein the old track point data is the second track point data with the smallest timestamp in the adjacent track point data set;
and in a case where the distance is greater than or equal to the distance threshold, taking the two or more second track point data as the one or more first track point data.
7. The method according to any one of claims 1 to 6, wherein the trajectory of the target object includes a first trajectory and a second trajectory, the first trajectory is the trajectory between the track point with the m-th largest timestamp and the track point with the largest timestamp, the second trajectory is the portion of the trajectory of the target object other than the first trajectory, and the first trajectory is displayed in a different manner from the second trajectory.
8. The method of any one of claims 1 to 7, wherein the first three-dimensional model comprises a first floor and a second floor, the first floor having a different height than the second floor, the trajectory of the target object being within the first floor, the method further comprising:
displaying the second floor within the first three-dimensional model if it is determined that the target object is present at the second floor.
9. The method of claim 8, wherein before the determining that the target object appears on the second floor, the method further comprises:
acquiring a first image containing the target object;
the determining that the target object is present at the second floor comprises:
and determining that the second image contains the target object by comparing the first image with the second image, wherein the second image is acquired by a camera of the second floor.
10. An image processing apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first three-dimensional model of a target scene and one or more first track point data of a target object in the target scene;
a processing unit for displaying the trajectory of the target object within the first three-dimensional model according to the one or more first trajectory point data.
11. An electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1 to 9.
12. A computer-readable storage medium, in which a computer program is stored, which computer program comprises program instructions which, if executed by a processor, cause the processor to carry out the method of any one of claims 1 to 9.
CN202110728309.2A 2021-06-29 2021-06-29 Image processing method and device, electronic equipment and computer readable storage medium Withdrawn CN113643320A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110728309.2A CN113643320A (en) 2021-06-29 2021-06-29 Image processing method and device, electronic equipment and computer readable storage medium
PCT/CN2021/134883 WO2023273154A1 (en) 2021-06-29 2021-12-01 Image processing method and apparatus, and device, medium and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110728309.2A CN113643320A (en) 2021-06-29 2021-06-29 Image processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113643320A true CN113643320A (en) 2021-11-12

Family

ID=78416321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110728309.2A Withdrawn CN113643320A (en) 2021-06-29 2021-06-29 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN113643320A (en)
WO (1) WO2023273154A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023273154A1 (en) * 2021-06-29 2023-01-05 西安商汤智能科技有限公司 Image processing method and apparatus, and device, medium and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5382007B2 (en) * 2010-02-22 2014-01-08 株式会社デンソー Moving track display device
CN108257146B (en) * 2018-01-15 2020-08-18 新疆大学 Motion trail display method and device
CN112434557A (en) * 2020-10-20 2021-03-02 深圳市华橙数字科技有限公司 Three-dimensional display method and device of motion trail, terminal and storage medium
CN113643320A (en) * 2021-06-29 2021-11-12 西安商汤智能科技有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2023273154A1 (en) 2023-01-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code; Ref country code: HK; Ref legal event code: DE; Ref document number: 40055714; Country of ref document: HK
WW01 Invention patent application withdrawn after publication; Application publication date: 20211112