CN112700491B - Method and device for determining visual field dividing line - Google Patents


Info

Publication number
CN112700491B
CN112700491B · Application CN201911014273.0A
Authority
CN
China
Prior art keywords
coordinate position
frame image
target object
determining
historical frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911014273.0A
Other languages
Chinese (zh)
Other versions
CN112700491A (en)
Inventor
李彦勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201911014273.0A
Publication of CN112700491A
Application granted
Publication of CN112700491B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A method, apparatus, computer-readable storage medium, and electronic device for determining a visual field dividing line are disclosed. The method includes: determining, in at least one historical frame image acquired by an image acquisition device, at least one target object meeting a first preset condition; determining a first coordinate position of each of the at least one target object in a corresponding first historical frame image among the at least one historical frame image; determining a second coordinate position of each of the at least one target object in a corresponding second historical frame image among the at least one historical frame image; and fitting a visual field dividing line in the current field of view of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position. Because the visual field dividing line is obtained from the first coordinate position at which each target object enters the current field of view of the image acquisition device and the second coordinate position at which it leaves, the resulting dividing line has high accuracy.

Description

Method and device for determining visual field dividing line
Technical Field
The application relates to the field of image processing, in particular to a method and a device for determining a visual field dividing line.
Background
In a business scenario where it is required to count the flow of a target object (e.g., a pedestrian or a vehicle) or identify whether the target object enters a specific area, it is generally required to correspondingly set an image acquisition device to acquire an image of the corresponding area. After the visual field dividing line of the image acquisition device is determined, the flow of the target object can be counted based on the determined visual field dividing line, and whether the target object enters a specific area or other related services can be identified.
Currently, the visual field dividing line of the image acquisition device is mainly determined in a manual scribing mode, and the visual field dividing line is seriously dependent on the working experience of staff, so that the accuracy of the determined visual field dividing line is lower.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a method and a device for determining a visual field dividing line, a computer-readable storage medium, and an electronic device. The visual field dividing line in the current field of view of an image acquisition device is obtained from the first coordinate position at which a target object enters that field of view and the second coordinate position at which it leaves. Because no manual participation is needed in determining the dividing line, errors caused by human factors are avoided, and the accuracy of the determined dividing line is higher.
According to a first aspect of the present application, there is provided a method of determining a visual field dividing line, comprising:
at least one target object meeting a first preset condition is determined in at least one frame of historical frame image before the current frame image acquired by the image acquisition device;
determining first coordinate positions of the at least one target object in corresponding first historical frame images in the at least one historical frame image respectively to obtain at least one first coordinate position;
determining second coordinate positions of the at least one target object in a second historical frame image corresponding to the at least one historical frame image respectively to obtain at least one second coordinate position;
fitting a view dividing line in the current view of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position.
According to a second aspect of the present application, there is provided a determination device of a visual field dividing line, comprising:
the first determining module is used for determining at least one target object meeting a first preset condition in at least one frame of historical frame image before the current frame image acquired by the image acquisition device;
the second determining module is used for determining first coordinate positions of the at least one target object in a corresponding first historical frame image in the at least one historical frame image respectively to obtain at least one first coordinate position;
a third determining module, configured to determine second coordinate positions of the at least one target object in a second history frame image corresponding to the at least one history frame image, to obtain at least one second coordinate position;
and the parting line fitting module is used for fitting a visual field parting line in the current visual field of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position.
According to a third aspect of the present application, there is provided a computer-readable storage medium storing a computer program for executing the above-described method of determining a visual field dividing line.
According to a fourth aspect of the present application, there is provided an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instruction from the memory, and execute the executable instruction to implement the method for determining the view dividing line.
The method and the device for determining the visual field dividing line, the computer readable storage medium and the electronic equipment provided by the application at least comprise the following beneficial effects:
On the one hand, in this embodiment, at least one target object meeting a first preset condition is determined in at least one historical frame image preceding the current frame image acquired by the image acquisition device. The first coordinate position of each target object in its first historical frame image is then determined, that is, the start coordinate position at which the target object enters the current field of view of the image acquisition device; the second coordinate position of the target object in its second historical frame image is further determined, that is, the end coordinate position at which the target object leaves the current field of view. The visual field dividing line in the current field of view of the image acquisition device is then acquired from the first and second coordinate positions corresponding to each target object. Because the dividing line is derived from where target objects actually enter and leave the field of view, and no manual participation is needed in the process, errors caused by human factors are avoided and the accuracy of the determined dividing line is high.
On the other hand, the method for determining the view dividing line provided by the embodiment can be automatically determined by the electronic equipment by setting a computer program, so that the determining efficiency of the view dividing line is improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments with reference to the accompanying drawings. The drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application together with its embodiments and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a flow chart illustrating a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of position information of a target object in different images in a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 3 is a second flow chart of a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 4 is a flowchart illustrating step 102 in a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a first detection frame in a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 6 is a flowchart illustrating step 103 in a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 7 is a flowchart illustrating a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 8 is a flowchart of step 104 in a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 9 is a flowchart illustrating a step 1042 in a method for determining a view dividing line according to an exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of different view dividing lines determined by a method for determining a view dividing line according to an exemplary embodiment of the present application;
fig. 11 is a schematic structural view of a determination device of a visual field dividing line provided by the first exemplary embodiment of the present application;
fig. 12 is a schematic structural view of a determination device for a visual field dividing line provided by a second exemplary embodiment of the present application;
fig. 13 is a schematic structural view of a determination device for a visual field dividing line according to a third exemplary embodiment of the present application;
fig. 14 is a schematic structural view of a determination device for a visual field dividing line provided by a fourth exemplary embodiment of the present application;
fig. 15 is a schematic structural view of a determination device of a visual field dividing line provided by a fifth exemplary embodiment of the present application;
fig. 16 is a schematic structural view of a parting line fitting unit in the apparatus for determining a parting line of a field of view according to an exemplary embodiment of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
When an image acquired by an image acquisition device is used to count the flow of target objects or to identify whether a target object enters a specific area, the visual field dividing line of the image acquisition device needs to be determined. At present, the dividing line is mainly determined by manual scribing; there is no flow reference for the target object during manual scribing, and the result depends heavily on the working experience of the staff, so the accuracy of the determined dividing line is low.
In the method for determining the visual field dividing line provided by the application, at least one target object meeting a first preset condition is determined in at least one historical frame image preceding the current frame image acquired by the image acquisition device. For each target object, the first coordinate position in its first historical frame image is determined, that is, the start coordinate position at which the object enters the current field of view of the image acquisition device, and the second coordinate position in its second historical frame image is determined, that is, the end coordinate position at which the object leaves the current field of view. The visual field dividing line in the current field of view of the image acquisition device is then acquired from the first and second coordinate positions of each target object.
Having described the basic idea of the application, various non-limiting embodiments of the present solution will be described in detail below with reference to the accompanying drawings.
Exemplary method
Fig. 1 is a flowchart of a method for determining a view dividing line according to an exemplary embodiment of the present application.
The embodiment can be applied to electronic equipment, and particularly can be applied to a server or a general computer. As shown in fig. 1, the method for determining a view dividing line according to an exemplary embodiment of the present application at least includes the following steps:
step 101: at least one target object meeting a first preset condition is determined in at least one frame of historical frame image before the current frame image acquired by the image acquisition device.
In an embodiment, the at least one historical frame image may be a set number of frames preceding the current frame image, for example, the 10 historical frame images preceding the current frame image.
In an embodiment, the first preset condition is that the target object has left the current field of view of the image capturing device; a target object identified in a historical frame image but not identified in the current frame image is a target object meeting the first preset condition.
In an embodiment, the target object is an object, such as a person or a vehicle, that needs to be focused on in the image acquired by the image acquisition device.
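As a rough illustration (not part of the patent), the first preset condition above can be checked by comparing the tracked objects of the historical frames with those of the current frame; the representation of detections as sets of tracking codes and all names below are hypothetical:

```python
# Hypothetical sketch: find target objects satisfying the first preset
# condition, i.e. objects tracked in the historical frames but absent
# from the current frame (they have left the field of view).

def objects_that_left(history_frames, current_frame):
    """history_frames: list of sets of tracking codes, one set per frame.
    current_frame: set of tracking codes detected in the current frame.
    Returns the tracking codes seen in history but absent now."""
    if not history_frames:
        return set()
    seen_in_history = set().union(*history_frames)
    return seen_in_history - current_frame

# Object 7 was tracked earlier but is gone from the current frame.
left = objects_that_left([{3, 7}, {3, 7, 9}, {3, 9}], {3, 9})
print(sorted(left))  # [7]
```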
Step 102: and determining first coordinate positions of at least one target object in corresponding first historical frame images in at least one historical frame image respectively to obtain at least one first coordinate position.
In an embodiment, the first history frame image is an image corresponding to a target object when the target object is first identified in at least one frame of history frame image; the first coordinate position is the pixel coordinate of the target object in the first history frame image, that is, the first coordinate position is the starting point coordinate position of the target object entering the current field of view of the image acquisition device (as shown in fig. 2).
Step 103: determining second coordinate positions of the at least one target object in corresponding second historical frame images in the at least one historical frame image respectively, to obtain at least one second coordinate position.
In an embodiment, the second history frame image is an image corresponding to a target object when the target object is last identified in the at least one history frame image; the second coordinate position is the pixel coordinate of the target object in the second history frame image, that is, the second coordinate position is the end point coordinate position (as shown in fig. 2) of the target object leaving the current field of view of the image acquisition device.
Step 104: and fitting a visual field dividing line in the current visual field of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position.
In an embodiment, the view dividing line is a virtual target object flow trend detection line, and is obtained by fitting according to a first coordinate position and a second coordinate position corresponding to each target object.
The method for determining the visual field dividing line provided by the embodiment has the beneficial effects that:
according to the embodiment, at least one target object meeting a first preset condition is determined in at least one historical frame image before a current frame image acquired by an image acquisition device, then a first coordinate position of the target object in the first historical frame image, namely a starting point coordinate position of the target object entering a current field of view of the image acquisition device, is determined, a second coordinate position of the target object in a second historical frame image, namely an end point coordinate position of the target object leaving the current field of view of the image acquisition device is further determined, and then a field dividing line in the current field of view of the image acquisition device is acquired according to the first coordinate position and the second coordinate position corresponding to each target object. Therefore, according to the method for determining the visual field dividing line, the visual field dividing line in the current visual field of the image acquisition device is obtained according to the first coordinate position of the target object entering the current visual field of the image acquisition device and the second coordinate position of the target object leaving the current visual field of the image acquisition device, and the manual participation is not needed in the process of determining the visual field dividing line, so that errors caused by human factors are avoided, and the accuracy of the determined visual field dividing line is high. In addition, the method for determining the visual field dividing line can be automatically determined by the electronic equipment through setting a computer program, so that the determining efficiency of the visual field dividing line is improved.
Fig. 3 is a schematic flow chart of steps that, in the embodiment shown in fig. 1, precede the step of determining the first coordinate positions of the at least one target object in the corresponding first historical frame images to obtain the at least one first coordinate position.
As shown in fig. 3, in an exemplary embodiment of the present application based on the embodiment shown in fig. 1, before the step of obtaining at least one first coordinate position shown in step 102, the method may specifically further include the following steps:
step 105: and acquiring the tracking code of at least one target object to obtain at least one tracking code.
In one embodiment, the tracking code is an identification code with a unique identifier allocated to the target object, different target objects correspond to different tracking codes, and the same target object has the same tracking code in different frame images. Specifically, when the target object is first identified, a tracking code having a unique identification is assigned to the target object.
Step 106: according to at least one tracking code, a first historical frame image and a second historical frame image which are respectively corresponding to at least one target object in at least one frame of historical frame image are determined.
In an embodiment, after determining the tracking code of the target object, determining that the image appearing for the first time of the tracking code is the first history frame image corresponding to the target object, and determining that the image appearing for the last time of the tracking code is the second history frame image corresponding to the target object.
It should be noted that, since the tracking code is allocated to the target object when the target object is identified for the first time, the at least one target object meeting the first preset condition in step 101 may be determined by comparing the tracking codes between different frame images.
In this embodiment, by determining the tracking code of the target object, the first historical frame image and the second historical frame image corresponding to the target object can be determined quickly, which effectively improves the efficiency of determining these images.
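A minimal sketch of steps 105 and 106, assuming detections are delivered as per-frame dictionaries keyed by tracking code (a hypothetical representation, not fixed by the patent):

```python
# Hypothetical sketch: for each tracking code, locate the first
# historical frame (first appearance, entry into the field of view) and
# the second historical frame (last appearance, departure from it).

def first_last_frames(history):
    """history: list of dicts mapping tracking code -> (x, y) position,
    ordered from oldest to newest frame.
    Returns {code: (first_frame_index, last_frame_index)}."""
    spans = {}
    for idx, frame in enumerate(history):
        for code in frame:
            if code not in spans:
                spans[code] = (idx, idx)             # first appearance
            else:
                spans[code] = (spans[code][0], idx)  # update last appearance
    return spans

history = [{1: (0, 5)}, {1: (3, 5), 2: (9, 0)}, {2: (9, 4)}]
print(first_last_frames(history))  # {1: (0, 1), 2: (1, 2)}
```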
Fig. 4 is a flowchart illustrating a step of determining a first coordinate position of each of at least one target object in a corresponding first historical frame image in at least one historical frame image to obtain at least one first coordinate position in the embodiment shown in fig. 1.
As shown in fig. 4, in an exemplary embodiment of the present application based on the embodiment shown in fig. 1, the step of obtaining at least one first coordinate position shown in step 102 may specifically include the following steps:
step 1021: determining first detection frames of corresponding first historical frame images of at least one target object in at least one historical frame image respectively, and obtaining at least one first detection frame.
In one embodiment, as shown in fig. 5, the first detection frame is a frame line in the first history frame image, where the frame line identifies the target object, and may be used to indicate the existence of the target object and the position of the target object in the first history frame image.
Step 1022: and determining the first coordinate positions of the designated positions of the at least one first detection frame in the first historical frame image respectively to obtain at least one first coordinate position.
In an embodiment, the pixel coordinates corresponding to the designated position of each target object's first detection frame in the first historical frame image are determined as that object's first coordinate position, which ensures that the first coordinate positions determined for different target objects are comparable. The designated position may be the center point of the first detection frame; of course, a user can choose a different designated position according to the actual service scenario.
In this embodiment, the first detection frame corresponding to each target object is determined, and the pixel coordinates of its designated position are taken as the first coordinate position of that object. This makes the acquired first coordinate positions comparable across target objects and avoids taking a randomly selected pixel coordinate from the detection frame or its interior as the first coordinate position, so the visual field dividing line determined from the first coordinate positions is more accurate.
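Assuming a detection frame is an axis-aligned pixel box (left, top, right, bottom) — an assumption, since the patent does not fix a representation — the designated position of step 1022 with the center point chosen could be computed as:

```python
# Hypothetical sketch: take the designated position of a detection frame
# (here its center point, as the text suggests) as the target object's
# coordinate position in that frame.

def designated_position(box):
    """box: (left, top, right, bottom) in pixel coordinates.
    Returns the center point of the box."""
    left, top, right, bottom = box
    return ((left + right) / 2.0, (top + bottom) / 2.0)

print(designated_position((10, 20, 50, 80)))  # (30.0, 50.0)
```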
FIG. 6 is a flowchart illustrating the step of determining a second coordinate position of each of the at least one target object in a corresponding second historical frame image of the at least one historical frame image to obtain at least one second coordinate position in the embodiment shown in FIG. 4.
As shown in fig. 6, in an exemplary embodiment of the present application based on the embodiment shown in fig. 4, the step of obtaining at least one second coordinate position shown in step 103 may specifically include the following steps:
step 1031: and determining second detection frames in the corresponding second historical frame images of the at least one target object in the at least one historical frame image respectively to obtain at least one second detection frame.
In an embodiment, the second detection frame is a frame line in the second history frame image, where the frame line identifies the target object, and may be used to indicate the existence of the target object and the position of the target object in the second history frame image.
Step 1032: and determining the second coordinate positions of the designated positions of the at least one second detection frame in the second historical frame image respectively to obtain at least one second coordinate position.
In an embodiment, the pixel coordinates corresponding to the designated position of the target object's second detection frame in the second historical frame image are determined as the second coordinate position, which ensures that the second coordinate positions of different target objects are comparable. Since the first and second coordinate positions are both pixel coordinates of the target object in an image, the designated position of the second detection frame is set to the same position as that of the first detection frame, for example the center point of the detection frame, so that the first and second coordinate positions are comparable with each other.
In this embodiment, the pixel coordinates of the designated position of the second detection frame in the second historical frame image are taken as the second coordinate position, which ensures that the second coordinate positions are comparable; using the same designated position also makes the first and second coordinate positions comparable with each other, so the visual field dividing line determined from them is more accurate.
Fig. 7 is a schematic flow chart of steps that, in the embodiment shown in fig. 1, precede the step of fitting the visual field dividing line in the current field of view of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position.
As shown in fig. 7, in an exemplary embodiment of the present application based on the embodiment shown in fig. 1, before the step of fitting the view dividing line in the current view where the image capturing device is located in step 104, the method specifically may further include the following steps:
Step 107: if there is a third historical frame image meeting a third preset condition in the at least one historical frame image, updating the second coordinate position corresponding to the target object with the third coordinate position of the target object in the third historical frame image.
In an embodiment, the third preset condition is that the straight-line distance between the target object's coordinate position in a historical frame image and its first coordinate position is greater than the straight-line distance between its first coordinate position and its corresponding second coordinate position. When such a historical frame image is found, it is a third historical frame image meeting the third preset condition, and the second coordinate position of the target object is updated with the target object's third coordinate position in that image, so that the finally obtained second coordinate position reflects the maximum movement range of the target object in the current field of view of the image acquisition device. If no third historical frame image meeting the third preset condition exists, fitting is performed directly on the obtained at least one first coordinate position and at least one second coordinate position to obtain the visual field dividing line in the current field of view.
It should be noted that, if the target object keeps moving in the same direction as its initial movement direction from entering the current field of view of the image acquisition device until leaving it, no third historical frame image exists; that is, the second coordinate position determined in the second historical frame image is already the coordinate position with the farthest straight-line distance from the first coordinate position. However, when the target object takes on a movement direction different from the initial one after entering the current field of view, a third historical frame image may exist, and the second coordinate position needs to be updated with the third coordinate position.
In one possible implementation, when the target object enters the current field of view of the image acquisition device, its first coordinate position is recorded. As the target object moves within the field of view, the third coordinate position with the farthest linear distance from the first coordinate position is continuously recorded and updated until the second coordinate position is determined. If the recorded third coordinate position does not coincide with the second coordinate position determined in the second historical frame image, the second coordinate position is updated with the third coordinate position.
In this embodiment, the third preset condition is used to judge whether a third historical frame image exists. If so, the second coordinate position is updated with the third coordinate position of the target object in the third historical frame image, so that the first coordinate position and the updated second coordinate position capture the maximum moving range of the target object in the current field of view of the image acquisition device, and the view dividing line fitted from these positions is more accurate.
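The update logic described in this embodiment can be sketched as follows. This is an illustrative sketch rather than the patent's implementation: the assumption that coordinate positions are (x, y) tuples compared by straight-line (Euclidean) distance, and the function names, are introduced here for illustration only.

```python
import math

def euclidean(p, q):
    # Straight-line distance between two (x, y) coordinate positions.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def update_second_position(first_pos, second_pos, history_positions):
    """Return the coordinate position farthest from first_pos.

    second_pos is the position where the target object left the field of
    view; history_positions are its positions in the historical frames.
    If some historical position (a "third coordinate position") is farther
    from first_pos than second_pos is (the third preset condition), it
    replaces second_pos, so the result spans the target object's maximum
    moving range within the current field of view.
    """
    best = second_pos
    best_dist = euclidean(first_pos, second_pos)
    for pos in history_positions:
        d = euclidean(first_pos, pos)
        if d > best_dist:  # third preset condition is met
            best, best_dist = pos, d
    return best
```

If the object never doubles back, no historical position beats the exit position and the second coordinate position is returned unchanged, matching the "no third historical frame image" case described above.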
Fig. 8 is a flowchart illustrating a step of fitting a view dividing line in a current view of the image capturing device according to at least one first coordinate position and at least one second coordinate position in the embodiment shown in fig. 1.
As shown in fig. 8, in an exemplary embodiment of the present application based on the embodiment shown in fig. 1, step 104 of fitting the view dividing line in the current view of the image capturing device may specifically include the following steps:
step 1041: and determining, for each corresponding pair of first and second coordinate positions, a third coordinate position at the center point between them, to obtain at least one third coordinate position.
In an embodiment, the third coordinate position is a center point between the first coordinate position and the corresponding second coordinate position of the target object.
Step 1042: and fitting a visual field dividing line in the current visual field of the image acquisition device according to at least one third coordinate position.
In an embodiment, after the third coordinate position is obtained, fitting is performed by using the third coordinate position, and a view dividing line of the image acquisition device in the current view is obtained.
In this embodiment, each third coordinate position is determined as the midpoint of a first coordinate position and its corresponding second coordinate position, and fitting is performed on the third coordinate positions to obtain the view dividing line of the image acquisition device in the current view. No manual participation is needed in determining the view dividing line, which avoids errors caused by human factors and yields a view dividing line of high accuracy.
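A minimal sketch of steps 1041 and 1042, assuming coordinate positions are (x, y) tuples and that the dividing line is fitted as a straight line y = k*x + b by ordinary least squares. The patent leaves the exact fitting method open (the dividing line may also be a curve or broken line), so the closed-form least-squares solution below is an illustrative choice, not the prescribed one.

```python
def fit_dividing_line(pairs):
    """Fit y = k*x + b through the midpoints of (first, second) pairs.

    Each pair is ((x1, y1), (x2, y2)): the first coordinate position where
    the target object entered the field of view and the second where it
    left. The midpoint of each pair is a third coordinate position
    (step 1041); the line through the midpoints is the fitted view
    dividing line (step 1042). Assumes the midpoints do not all share
    the same x value (otherwise the denominator below is zero).
    """
    mids = [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in pairs]  # third coordinate positions
    n = len(mids)
    sx = sum(x for x, _ in mids)
    sy = sum(y for _, y in mids)
    sxx = sum(x * x for x, _ in mids)
    sxy = sum(x * y for x, y in mids)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # least-squares slope
    b = (sy - k * sx) / n                          # least-squares intercept
    return k, b
```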
Fig. 9 is a flowchart illustrating a step of fitting a view dividing line in a current view of the image capturing device according to at least one third coordinate position in the embodiment shown in fig. 8.
As shown in fig. 9, in an exemplary embodiment of the present application based on the embodiment shown in fig. 8, step 1042 of fitting the field dividing line in the current field of view of the image capturing device may specifically include the following steps:
step 10421: and judging whether the number of the at least one third coordinate position meets a second preset condition or not.
In an embodiment, the second preset condition is that the number of third coordinate positions is greater than or equal to a preset threshold, for example 200.
Step 10422: if the number of the at least one third coordinate position meets the second preset condition, fitting a view dividing line in the current view of the image acquisition device according to the at least one third coordinate position.
In an embodiment, when the number of third coordinate positions meets the second preset condition, enough third coordinate positions have been collected, and fitting can be performed on them to obtain the view dividing line in the current field of view with a guaranteed level of accuracy.
In this embodiment, the second preset condition limits the number of third coordinate positions used for fitting the view dividing line: fitting is performed only when the number of third coordinate positions meets the second preset condition, which ensures that the view dividing line obtained from them is of higher accuracy.
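The gating of steps 10421 and 10422 can be sketched as follows. The threshold of 200 comes from the example above, while the function shape and names are hypothetical illustrations.

```python
MIN_SAMPLES = 200  # example threshold from the description; an assumption

def maybe_fit(third_positions, fit_fn, min_samples=MIN_SAMPLES):
    """Fit only when enough third coordinate positions have accumulated.

    Returns the fitted dividing line, or None while the second preset
    condition (count >= min_samples) is not yet met, so that fitting on
    too few samples never produces an unreliable dividing line.
    """
    if len(third_positions) < min_samples:
        return None  # second preset condition not met; keep collecting
    return fit_fn(third_positions)
```

In practice this would be called once per processed frame, with the line only being published after the sample count crosses the threshold.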
In an exemplary embodiment of the present application, when the current field of view of the image capturing device is large or differs significantly between areas, the field of view is divided in advance into different field-of-view areas, and the method described in any of the embodiments above is applied to each area to obtain its own view dividing line. That is, for a target object in a given field-of-view area, the first coordinate position where it enters the area and the second coordinate position where it leaves the area are determined, and fitting is then performed according to these positions to obtain the view dividing line of that area. For example, as shown in fig. 10, the current field of view of the image capturing device is divided into 3 field-of-view areas, and for each area the third coordinate positions, namely the center points between the first coordinate positions where target objects enter the area and the second coordinate positions where they leave it, are fitted to determine 3 view dividing lines. The form of the determined view dividing lines is not limited in this embodiment; each may be a curve, a straight line, or a broken line.
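A sketch of pre-dividing the field of view into regions before fitting a per-region dividing line. The equal-width horizontal split and the function names are assumptions for illustration; the patent does not prescribe how the field-of-view areas are delimited.

```python
def split_into_regions(width, n_regions):
    """Divide the horizontal field of view into equal-width regions.

    Returns a list of (lo, hi) x-ranges, one per field-of-view area.
    """
    step = width / n_regions
    return [(i * step, (i + 1) * step) for i in range(n_regions)]

def region_index(x, regions):
    # Which field-of-view area a coordinate position's x value falls into.
    for i, (lo, hi) in enumerate(regions):
        if lo <= x < hi:
            return i
    return len(regions) - 1  # clamp the right edge into the last region
```

Entry/exit coordinate pairs would then be bucketed by `region_index` and each bucket fitted independently, yielding one dividing line per area as in fig. 10.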
Exemplary apparatus
Based on the same conception as the embodiment of the method, the embodiment of the application also provides a device for determining the visual field dividing line.
Fig. 11 is a schematic structural diagram of a device for determining a view dividing line according to an exemplary embodiment of the present application.
As shown in fig. 11, a device for determining a view dividing line according to an exemplary embodiment of the present application includes:
a first determining module 111, configured to determine at least one target object that meets a first preset condition in at least one frame of history frame image that precedes the current frame image acquired by the image acquisition device;
a second determining module 112, configured to determine first coordinate positions of at least one target object in corresponding first historical frame images in at least one historical frame image, so as to obtain at least one first coordinate position;
a third determining module 113, configured to determine second coordinate positions of at least one target object in a second history frame image corresponding to the at least one history frame image, to obtain at least one second coordinate position;
the parting line fitting module 114 is configured to fit a view parting line in a current view of the image capturing device according to the at least one first coordinate position and the at least one second coordinate position.
As shown in fig. 12, in an exemplary embodiment of the present application, the determining apparatus of the view dividing line further includes a tracking code determining module 115 and a history frame image determining module 116;
a tracking code determining module 115, configured to obtain a tracking code of at least one target object, to obtain at least one tracking code;
the historical frame image determining module 116 is configured to determine, according to at least one tracking code, a first historical frame image and a second historical frame image, which are respectively corresponding to each of the at least one target object in the at least one frame of historical frame image.
As shown in fig. 13, in an exemplary embodiment of the present application, the second determining module 112 includes:
a first determining unit 1121, configured to determine first detection frames of first history frame images corresponding to at least one target object in at least one history frame image, to obtain at least one first detection frame;
the second determining unit 1122 is configured to determine first coordinate positions of each of the at least one first detection frame designated position in the first history frame image, to obtain at least one first coordinate position.
As shown in fig. 13, in an exemplary embodiment of the present application, the third determining module 113 includes:
a third determining unit 1131, configured to determine second detection frames in the second history frame images corresponding to the at least one target object in the at least one history frame image, to obtain at least one second detection frame;
the fourth determination unit 1132 determines the second coordinate positions of the at least one second detection frame designated position in the second history frame image, respectively, to obtain at least one second coordinate position.
As shown in fig. 14, in an exemplary embodiment of the present application, the determining apparatus of the view dividing line further includes a coordinate updating module 117, configured to update, if a third history frame image meeting a third preset condition exists in at least one frame of history frame images, a second coordinate position corresponding to the target object in the third history frame image with a third coordinate position of the target object in the third history frame image.
As shown in fig. 15, in an exemplary embodiment of the present application, the split line fitting module 114 includes:
a fifth determining unit 1141, configured to determine third coordinate positions corresponding to the center points between the at least one first coordinate position and the at least one second coordinate position, to obtain at least one third coordinate position;
and a parting line fitting unit 1142, configured to fit a view parting line in the current view of the image capturing device according to at least one third coordinate position.
As shown in fig. 16, in an exemplary embodiment of the present application, the parting line fitting unit 1142 includes:
a judging subunit 11421, configured to judge whether the number of at least one third coordinate position meets a second preset condition;
and the parting line fitting subunit 11422 is configured to fit, if the number of the at least one third coordinate position meets the second preset condition, a field parting line in the current field of view where the image capturing device is located according to the at least one third coordinate position.
Exemplary electronic device
Fig. 17 illustrates a block diagram of an electronic device according to an embodiment of the application.
As shown in fig. 17, the electronic device 100 includes one or more processors 101 and memory 102.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in the electronic device 100 to perform desired functions.
Memory 102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 101 to implement the method of determining a view dividing line of the various embodiments of the application described above and/or other desired functions.
In one example, the electronic device 100 may further include: an input device 103 and an output device 104, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
Of course, only some of the components of the electronic device 100 relevant to the present application are shown in fig. 17 for simplicity, components such as buses, input/output interfaces, and the like being omitted. In addition, the electronic device 100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of determining a view dividing line according to various embodiments of the application described in the "exemplary method" section of this specification.
The computer program product may write program code for performing operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of determining a view dividing line according to various embodiments of the present application described in the "exemplary method" section above in this specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), optical fiber, portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended, mean "including but not limited to," and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A method for determining a visual field dividing line includes:
at least one target object meeting a first preset condition is determined in at least one frame of historical frame image before the current frame image acquired by the image acquisition device;
determining first coordinate positions of the at least one target object in corresponding first historical frame images in the at least one historical frame image respectively to obtain at least one first coordinate position;
determining second coordinate positions of the at least one target object in a second historical frame image corresponding to the at least one historical frame image respectively to obtain at least one second coordinate position;
fitting a view dividing line in the current view of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position.
2. The method of claim 1, wherein said fitting a field of view split line within a current field of view in which the image acquisition device is located comprises:
determining a third coordinate position corresponding to a center point between the at least one first coordinate position and the at least one second coordinate position corresponding to each other, and obtaining at least one third coordinate position;
and fitting a visual field dividing line in the current visual field of the image acquisition device according to the at least one third coordinate position.
3. The method of claim 2, wherein fitting a field of view split line within a current field of view in which the image acquisition device is located comprises:
judging whether the number of the at least one third coordinate position accords with a second preset condition or not;
and if the number of the at least one third coordinate position accords with the second preset condition, fitting a view dividing line in the current view of the image acquisition device according to the at least one third coordinate position.
4. The method of claim 1, wherein prior to the step of fitting a field of view split line within a current field of view in which the image acquisition device is located, further comprising:
if a third historical frame image meeting a third preset condition exists in the at least one historical frame image, a third coordinate position of the target object in the third historical frame image is utilized to update a second coordinate position corresponding to the target object in the third historical frame image.
5. The method of claim 1, wherein the determining the first coordinate position of each of the at least one target object in the corresponding first historical frame image in the at least one historical frame image, results in at least one first coordinate position, comprises:
determining first detection frames of corresponding first historical frame images of the at least one target object in the at least one historical frame image respectively to obtain at least one first detection frame;
and determining first coordinate positions of the designated positions of the at least one first detection frame in the first historical frame image respectively to obtain at least one first coordinate position.
6. The method of claim 5, wherein the determining the second coordinate position of each of the at least one target object in the corresponding second historical frame image in the at least one historical frame image, results in at least one second coordinate position, comprises:
determining second detection frames in corresponding second historical frame images of the at least one target object in the at least one historical frame image respectively to obtain at least one second detection frame;
and determining second coordinate positions of the designated positions of the at least one second detection frame in the second historical frame image respectively to obtain at least one second coordinate position.
7. The method of any of claims 1-6, wherein prior to the step of obtaining at least one first coordinate location, further comprising:
acquiring the tracking code of the at least one target object to obtain at least one tracking code;
and according to the at least one tracking code, determining a first historical frame image and a second historical frame image which are respectively corresponding to the at least one target object in the at least one frame historical frame image.
8. A visual field dividing line determining apparatus comprising:
the first determining module is used for determining at least one target object meeting a first preset condition in at least one frame of historical frame image before the current frame image acquired by the image acquisition device;
the second determining module is used for determining first coordinate positions of the at least one target object in a corresponding first historical frame image in the at least one historical frame image respectively to obtain at least one first coordinate position;
a third determining module, configured to determine second coordinate positions of the at least one target object in a second history frame image corresponding to the at least one history frame image, to obtain at least one second coordinate position;
and the parting line fitting module is used for fitting a visual field parting line in the current visual field of the image acquisition device according to the at least one first coordinate position and the at least one second coordinate position.
9. A computer-readable storage medium storing a computer program for executing the method of determining a visual field dividing line according to any one of the preceding claims 1 to 7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the method for determining a view segmentation line according to any one of claims 1-7.
CN201911014273.0A 2019-10-23 2019-10-23 Method and device for determining visual field dividing line Active CN112700491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911014273.0A CN112700491B (en) 2019-10-23 2019-10-23 Method and device for determining visual field dividing line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911014273.0A CN112700491B (en) 2019-10-23 2019-10-23 Method and device for determining visual field dividing line

Publications (2)

Publication Number Publication Date
CN112700491A CN112700491A (en) 2021-04-23
CN112700491B true CN112700491B (en) 2023-08-29

Family

ID=75505387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911014273.0A Active CN112700491B (en) 2019-10-23 2019-10-23 Method and device for determining visual field dividing line

Country Status (1)

Country Link
CN (1) CN112700491B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5583494A (en) * 1991-06-13 1996-12-10 Mitsubishi Denki Kabushiki Kaisha Traffic information display system
CN103150548A (en) * 2013-01-31 2013-06-12 南京吉目希自动化科技有限公司 Method for improving machine vision system identification accuracy
CN104751486A (en) * 2015-03-20 2015-07-01 安徽大学 Moving object relay tracing algorithm of multiple PTZ (pan/tilt/zoom) cameras
CN108446585A (en) * 2018-01-31 2018-08-24 深圳市阿西莫夫科技有限公司 Method for tracking target, device, computer equipment and storage medium
CN108491857A (en) * 2018-02-11 2018-09-04 中国矿业大学 A kind of multiple-camera target matching method of ken overlapping


Also Published As

Publication number Publication date
CN112700491A (en) 2021-04-23

Similar Documents

Publication Publication Date Title
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN106952303B (en) Vehicle distance detection method, device and system
CN111145214A (en) Target tracking method, device, terminal equipment and medium
US20150125041A1 (en) Reinforcement learning approach to character level segmentation of license plate images
CN110705405A (en) Target labeling method and device
US11688078B2 (en) Video object detection
CN111814746A (en) Method, device, equipment and storage medium for identifying lane line
CN112078571B (en) Automatic parking method, automatic parking equipment, storage medium and automatic parking device
US11037301B2 (en) Target object detection method, readable storage medium, and electronic device
JP2016206995A (en) Image processing apparatus, image processing method, and program
US20160313799A1 (en) Method and apparatus for identifying operation event
CN112507852A (en) Lane line identification method, device, equipment and storage medium
CN109740442B (en) Positioning method, positioning device, storage medium and electronic equipment
CN109413470B (en) Method for determining image frame to be detected and terminal equipment
CN112700491B (en) Method and device for determining visual field dividing line
CN114584836B (en) Method, device, system and medium for detecting using behavior of electronic product
CN112150529B (en) Depth information determination method and device for image feature points
CN113505700A (en) Image processing method, device, equipment and storage medium
CN110933314B (en) Focus-following shooting method and related product
CN114565952A (en) Pedestrian trajectory generation method, device, equipment and storage medium
WO2018142916A1 (en) Image processing device, image processing method, and image processing program
CN111212239B (en) Exposure time length adjusting method and device, electronic equipment and storage medium
CN112380938B (en) Face recognition and temperature measurement method, device, equipment and medium
CN110807403B (en) User identity identification method and device and electronic equipment
CN112561836B (en) Method and device for acquiring point cloud set of target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant