CN114845055A - Method and device for determining shooting parameters of image acquisition equipment and electronic equipment - Google Patents

Method and device for determining shooting parameters of image acquisition equipment and electronic equipment

Info

Publication number
CN114845055A
CN114845055A
Authority
CN
China
Prior art keywords
target object
determining
body size
image
shooting
Prior art date
Legal status
Granted
Application number
CN202210457316.8A
Other languages
Chinese (zh)
Other versions
CN114845055B (en)
Inventor
刘诗男
杨昆霖
侯军
伊帅
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202210457316.8A priority Critical patent/CN114845055B/en
Publication of CN114845055A publication Critical patent/CN114845055A/en
Application granted granted Critical
Publication of CN114845055B publication Critical patent/CN114845055B/en
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a method and a device for determining shooting parameters of image acquisition equipment and electronic equipment, wherein the determination method comprises the following steps: acquiring at least one image to be detected; determining the position of a head central point corresponding to at least one target object in the at least one image to be detected and a first body size corresponding to the at least one target object; determining a second body size corresponding to the at least one target object according to the position of the head central point corresponding to the at least one target object; wherein the measurement direction and measurement unit of the first body size and the second body size are the same, and the second body size is related to the shooting parameter; and determining shooting parameters of the image acquisition equipment according to the first body size and the second body size corresponding to the at least one target object. The embodiment of the disclosure can automatically determine the shooting parameters, and has low labor cost and high precision.

Description

Method and device for determining shooting parameters of image acquisition equipment and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for determining shooting parameters of an image capture device, and an electronic device.
Background
With the development of related technologies such as intelligent security and augmented reality, human head detection technology has gradually attracted the attention of developers. Human head detection techniques can support upper-layer tasks related thereto, such as pedestrian counting and pedestrian distance estimation in smart city and smart security scenes, or human posture reconstruction tasks in augmented reality scenes. Real-time human head detection requires the support of image acquisition equipment, whose shooting parameters directly influence the precision of the head detection technology; when shooting parameters are missing, the head detection technology is prone to abnormal operation. Therefore, how to determine the shooting parameters of the image acquisition equipment is a technical problem that urgently needs to be solved by developers.
Disclosure of Invention
The disclosure provides a technical scheme for determining shooting parameters.
According to an aspect of the present disclosure, there is provided a method of determining shooting parameters of an image capturing apparatus, the method comprising: acquiring at least one image to be detected, the image to be detected being acquired by the image acquisition equipment; determining the position of a head central point corresponding to at least one target object in the at least one image to be detected and a first body size corresponding to the at least one target object; determining a second body size corresponding to the at least one target object according to the position of the head center point corresponding to the at least one target object, wherein the first body size and the second body size have the same measurement direction and the same measurement unit, and the second body size is related to the shooting parameter; and determining shooting parameters of the image acquisition equipment according to the first body size and the second body size corresponding to the at least one target object.
In a possible embodiment, the determining a second body size corresponding to the at least one target object according to the position of the head center point corresponding to the at least one target object includes: determining pixel coordinates of two vertexes of a body area of the at least one target object according to the pixel coordinates of the head central point corresponding to the at least one target object; the pixel coordinates of the head central point are used for representing the position of the head central point corresponding to the at least one target object in the at least one image to be detected; the pixel coordinates of the two vertices of the body region differ in value in a direction of measurement of a first body dimension; and determining a second body size corresponding to the at least one target object according to the pixel coordinates of the two vertexes of the body area of the at least one target object.
In a possible embodiment, the determining pixel coordinates of two vertices of the body region of the at least one target object according to the pixel coordinates of the head center point corresponding to the at least one target object includes: determining world coordinates of the head central point corresponding to the at least one target object according to pixel coordinates of the head central point corresponding to the at least one target object and a preset conversion relation, wherein the preset conversion relation is determined according to the shooting parameters; determining world coordinates of two vertexes of a body area of the at least one target object according to the world coordinates of the head central point corresponding to the at least one target object and preset body parameters; and determining the pixel coordinates of the two vertexes of the body area of the at least one target object according to the preset conversion relation and the world coordinates of the two vertexes of the body area of the at least one target object.
In a possible embodiment, the shooting parameters of the image acquisition device include: at least one of installation height, shooting focal length and shooting angle; the preset conversion relation is related to the shooting focal length and the shooting angle; the world coordinates of the head center point are related to the mounting height.
In a possible embodiment, the determining a second body size corresponding to the at least one target object according to the pixel coordinates of the two vertices of the body region of the at least one target object includes: determining an offset, in the measurement direction of the first body size, between the pixel coordinates of the two vertices of the body region of the at least one target object; and taking the offset as the second body size corresponding to the at least one target object under the condition that the offset is larger than a preset offset.
In a possible implementation manner, before the determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object, the determining method includes: determining a definition score corresponding to the at least one target object according to the size relation between the first body size corresponding to the at least one target object and a preset size; wherein the sharpness score is inversely related to a difference in size between the first body dimension and a predetermined dimension; screening out at least one first object in all the target objects according to the definition score corresponding to the at least one target object, and taking the first object as a new target object; determining the shooting parameters according to a first body size and a second body size corresponding to the at least one target object, including: and determining the shooting parameters according to the first body size and the second body size corresponding to the new target object.
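The screening described above can be sketched as follows. The 1/(1 + gap) scoring function and the dict field names are assumptions; the patent only requires that the sharpness score be inversely related to the gap between the first body size and the preset size:

```python
def screen_by_sharpness(objects, preset_size, keep=2):
    """Keep the `keep` target objects whose first body size is closest
    to the preset size, using a score that is inversely related to the
    size gap. The scoring function and dict layout are assumptions."""
    def score(obj):
        return 1.0 / (1.0 + abs(obj["first_size"] - preset_size))
    return sorted(objects, key=score, reverse=True)[:keep]
```

The retained objects then serve as the new target objects when determining the shooting parameters.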
In a possible implementation manner, the determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object includes: determining at least two first parameters according to the first body sizes corresponding to the at least two target objects and the corresponding second body sizes; wherein the different first parameters are all shooting parameters of the same type; and fusing the at least two first parameters, and taking a fusion result as the shooting parameter.
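The fusion step can be sketched likewise; the patent leaves the fusion rule open, so the plain average below is an assumed choice (a median would be more robust to outlier estimates):

```python
def fuse_parameters(first_params):
    """Fuse several per-object estimates of the same type of shooting
    parameter into one value by plain averaging (assumed fusion rule)."""
    return sum(first_params) / len(first_params)
```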
In a possible embodiment, the determining the position of the head center point corresponding to at least one target object in the at least one image to be detected and the first body size corresponding to the at least one target object includes at least one of: determining the position of a head central point corresponding to at least one target object in at least one image to be detected through a preset head central point detection model; determining a detection frame corresponding to the at least one target object through a preset body detection model; determining the first body size based on a detection frame corresponding to the at least one target object.
According to an aspect of the present disclosure, there is provided a determination apparatus of shooting parameters of an image capturing device, the determination apparatus including: the image acquisition module is used for acquiring at least one image to be detected; the image to be detected is acquired by the image acquisition equipment; a first body size determining module, configured to determine a position of a head center point corresponding to at least one target object in the at least one image to be detected, and a first body size corresponding to the at least one target object; the second body size determining module is used for determining a second body size corresponding to the at least one target object according to the position of the head central point corresponding to the at least one target object; wherein the first body size and the second body size are measured in the same direction; and the shooting parameter determining module is used for determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, at least one image to be detected may be acquired, then a position of a head center point corresponding to at least one target object in the at least one image to be detected and a first body size corresponding to the at least one target object may be determined, and then a second body size corresponding to the at least one target object may be determined according to the position of the head center point corresponding to the at least one target object, where the second body size is related to a shooting parameter of an image acquisition device, and finally the shooting parameter may be determined according to the first body size corresponding to the at least one target object and the corresponding second body size. The embodiment of the disclosure can automatically determine the shooting parameters of the image acquisition equipment based on the image to be detected, has lower labor cost, has higher precision of the determined shooting parameters based on the image to be detected, and is favorable for improving the precision of the subsequent head detection function.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a method for determining shooting parameters of an image capturing device according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of a method for determining shooting parameters of an image capturing device according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a determination apparatus for shooting parameters of an image capturing device according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device provided in accordance with an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The shooting parameters of the image capturing apparatus in the related art are generally determined as follows: an installer installs the image acquisition equipment at the target position so that it can shoot images to be detected of the area to be detected, and then manually determines the missing shooting parameters of the image acquisition equipment. This tends to cause the following problems: 1. the error of manual determination is large, which easily affects the precision of the subsequent head detection function; 2. the labor cost is high.
In view of this, an embodiment of the present disclosure provides a method for determining shooting parameters of an image capturing device, where the method for determining shooting parameters includes: the method comprises the steps of obtaining at least one image to be detected, then determining the position of a head central point corresponding to at least one target object in the at least one image to be detected and the first body size corresponding to the at least one target object, and then determining the second body size corresponding to the at least one target object according to the position of the head central point corresponding to the at least one target object, wherein the second body size is related to shooting parameters of image acquisition equipment. And finally, determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object. The embodiment of the disclosure can automatically determine the shooting parameters of the image acquisition equipment based on the image to be detected, has lower labor cost, has higher precision of the determined shooting parameters based on the image to be detected, and is favorable for improving the precision of the subsequent head detection function.
In a possible implementation manner, the determining method may be performed by an electronic device such as a terminal device or a server, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for determining shooting parameters of an image capturing apparatus according to an embodiment of the present disclosure. As shown in fig. 1, the determining method includes:
step S100, at least one image to be detected is obtained. And acquiring the image to be detected by the image acquisition equipment. Illustratively, the electronic device may be connected with the image capturing device in a wired or wireless manner, so that the electronic device may obtain the image to be detected through the image capturing device. The above image pickup apparatus may include: the system comprises a visible light camera, a multi-view camera and the like, and developers can flexibly set the cameras according to actual needs. In one embodiment, the image to be detected is an original image acquired by the image acquisition equipment; in other embodiments, the image to be detected may be a processed image, for example, an image processed by enhancement processing, screening, or the like, or an image obtained by splicing a plurality of image acquisition devices, and is not limited in this respect.
Step S200, determining the position of the head center point corresponding to at least one target object in the at least one image to be detected and the first body size corresponding to the at least one target object. The target object may be any object that the developer wants to detect, such as pedestrians, animals, specific persons, specific animals, etc.; embodiments of the present disclosure are not limited thereto. The first body size may be a body height or a body width of the target object.
In a possible embodiment, the position of the head center point may be obtained by a trained machine learning model. For example, the determining the position of the head center point corresponding to the at least one target object in the at least one image to be detected may include: determining the position of the head center point corresponding to the at least one target object in the at least one image to be detected through a preset head center point detection model. For example, training objects in training images may be labeled manually. The training objects have the same object type as the target object (for example, both are pedestrians, animals, specific persons, or the like), so that the trained machine learning model can determine the target object in the image to be detected and can serve as the preset head center point detection model. For example, the head center point of each training object in the training image may be labeled; in other words, the head center point corresponding to the target object can be identified by the machine learning model trained on the labeled head center points of the training objects. After training is completed, the image to be detected can be input into the trained model to obtain the position information of the head center point corresponding to the target object in the image to be detected. In one example, the position information of the head center point may be expressed as pixel coordinates in a pixel coordinate system in the related art. The embodiment of the disclosure allows the machine learning model to output the position of the head center point; compared with a machine learning model that directly outputs a head region in the related art, the manual labeling cost is lower and the manual labeling error is smaller.
In other words, in the labeling stage of the training image, a developer only needs to label the head center point of each training object; compared with labeling a complete head region, the manual labeling cost of the embodiment of the disclosure is lower. In addition, compared with the overlapping labeling frames in the related art, labeling head center points has a better visual effect: the distance between labeled points is usually larger, so that even when there are many training objects in the training image, developers can finish labeling with high distinguishability, which is beneficial to improving the detection effect of the machine learning model trained on such training images.
In one possible embodiment, the first body size may be determined by another trained machine learning model: the model determines body frame data (e.g., a frame vertex position, a frame height, a frame width, etc.) corresponding to the target object, and the height or width given by the body frame data is taken as the first body size corresponding to the target object. In other words, the first body size may be directly output by the machine learning model; that is, the first body size is the body size estimated by the machine learning model. For example, the determining the first body size corresponding to the at least one target object may include: determining a detection frame corresponding to the at least one target object through a preset body detection model, and then determining the first body size based on the detection frame corresponding to the at least one target object. For example, the machine learning model can be obtained by training on training images labeled with body frames (i.e., the detection frames), so that the trained machine learning model can output body frame data corresponding to a target object in an image to be detected and can serve as the preset body detection model.
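As a hedged illustration of this step, the sketch below reads a first body size off a detection frame; the (x_min, y_min, x_max, y_max) frame format is an assumption, since the patent does not fix the output format of the preset body detection model:

```python
def first_body_size(box, measure="height"):
    """Derive the first body size, in pixels, from a detection frame.

    `box` is assumed to be (x_min, y_min, x_max, y_max) in pixel
    coordinates, as output by a hypothetical body detection model.
    """
    x_min, y_min, x_max, y_max = box
    if measure == "height":
        return y_max - y_min  # vertical extent of the frame
    return x_max - x_min      # horizontal extent of the frame
```

For a frame (10, 20, 50, 180) this yields a body height of 160 pixels and a body width of 40 pixels.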
Step S300, determining a second body size corresponding to the at least one target object according to the position of the head center point corresponding to the at least one target object. The measurement direction and measurement unit of the first body size and the second body size are the same, and both are sizes in the pixel coordinate system of the image to be detected; the second body size is related to the shooting parameters. For example, the first body size and the second body size may be the same one of a body height and a body width; the first body size may be directly determined by a trained machine learning model, while the second body size may be determined by combining the position of the head center point with the calibration conversion relationship (corresponding to the shooting parameters) of the image acquisition device, which is not limited here. Although the first body size and the second body size are generated in different manners, both are body sizes of the same target object, so the two can be considered equal under ideal conditions (for example, when the accuracy of the trained machine learning model is higher than a threshold, which developers can set flexibly). In the actual application process, developers can add an error value to one of the two sizes according to the actual application conditions of the image acquisition device, forming an equation relating the first body size and the second body size. The shooting parameters can then be determined based on this equation, as described in detail later.
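One hypothetical way to exploit the near-equality of the two body sizes — not prescribed by the patent, which only derives the equation relating them — is a simple search over candidate parameter values for the one that best reconciles the model-estimated first body size with the projection-derived second body size:

```python
def estimate_parameter(first_size, second_size_fn, candidates):
    """Return the candidate shooting parameter whose predicted second
    body size is closest to the observed first body size.

    `second_size_fn` is a hypothetical helper mapping a candidate
    parameter value to the second body size computed from the head
    center point under that parameter.
    """
    return min(candidates, key=lambda p: abs(second_size_fn(p) - first_size))
```

In practice the patent solves the equation analytically; the grid search above is only meant to show how the two sizes constrain the unknown parameter.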
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for determining shooting parameters of an image capturing apparatus according to an embodiment of the present disclosure. In one possible implementation, step S300 may include:
step S310, determining pixel coordinates of two vertices of the body region of the at least one target object according to the pixel coordinates of the head center point corresponding to the at least one target object. The pixel coordinates of the head center point are used for representing the position of the head center point corresponding to the at least one target object in the at least one image to be detected. The pixel coordinates of the two vertices of the body region differ in value in the direction of measurement of the first body dimension. For example, if the first body size is a body height, a height difference may exist between two vertices of the body area, and if the first body size is a body width, a width difference may exist between two vertices of the body area. For example: if the first body size is body height, then two vertices of the body area are selected: the vertex of the upper left corner and the vertex of the lower left corner of the body area, or the vertex of the upper right corner and the vertex of the lower left corner, and the like.
In one possible implementation, step S310 may include: and determining the world coordinate of the head central point corresponding to the at least one target object according to the pixel coordinate of the head central point corresponding to the at least one target object and a preset conversion relation. And the preset conversion relation is determined according to the shooting parameters. Illustratively, the preset conversion relationship is used for performing mutual conversion between the pixel coordinate and the world coordinate of any spatial point in the image to be detected. The preset conversion relation is related to a shooting focal length and a shooting angle in the shooting parameters. For example: the rotation matrix R in the related art can be represented by the following formula:
$$R=\begin{bmatrix}\cos\theta&0&\sin\theta\\0&1&0\\-\sin\theta&0&\cos\theta\end{bmatrix}$$
where θ may be used to represent a shooting angle in the shooting parameters.
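A sketch of this rotation in code; since the matrix itself survives only as an image placeholder in the source, the pitch-about-the-horizontal-axis form below is an assumption:

```python
import numpy as np

def rotation_matrix(theta):
    """Rotation by the shooting (tilt) angle theta, in radians.

    Assumes a pitch rotation about the horizontal axis, mixing the
    vertical and depth directions; the exact sign convention of the
    original formula is not recoverable from the extracted text.
    """
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])
```

At theta = 0 this reduces to the identity, and the matrix is orthonormal for any theta.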
On this basis, according to the imaging model of the image acquisition device in the related art, the correspondence between the world coordinates (X, Y, Z) of any spatial point in the world coordinate system and its pixel coordinates (u, v, 1) (here expressed as homogeneous coordinates) can be obtained:

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=K\,R^{T}\begin{bmatrix}X\\Y\\Z\end{bmatrix}$$
where Z_c denotes the Z value of the spatial point after mapping into the camera coordinate system in the related art, and can be used to represent the depth value in the camera coordinate system; u denotes the coordinate information of the spatial point in the vertical direction of the pixel coordinate system, and v the coordinate information in the horizontal direction; K is the internal reference matrix of the image acquisition device,

$$K=\begin{bmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{bmatrix}$$

whose specific generation mode can refer to the related art. f_x and f_y denote the focal length in the x and y directions (the measurement unit can be the number of pixels); they can be converted from the focal length in millimetres among the shooting parameters through the measurement-unit conversion relation in the related art. The x direction is the vertical direction and the y direction is the horizontal direction. c_x and c_y denote the offset, in the x and y directions of the pixel coordinate system, of the principal point relative to the origin of the pixel coordinates (e.g., the vertex in the upper left corner of the image). R^T denotes the transpose of R. X, Y and Z denote the coordinate information of the spatial point in the vertical, horizontal and depth directions of the world coordinate system.
The above formula can be simplified as:

$$u=f_x\,\frac{X\cos\theta-Z\sin\theta}{X\sin\theta+Z\cos\theta}+c_x$$

$$v=f_y\,\frac{Y}{X\sin\theta+Z\cos\theta}+c_y$$

The simplified formulas may be used as the preset conversion relation.
Then, the world coordinates of the two vertexes of the body area of the at least one target object can be determined according to the world coordinates of the head center point corresponding to the at least one target object and the preset body parameters. Illustratively, the preset physical parameters may include: at least one of head height, head width, body height, body width, and in one example, body width alone may be selected for calculation with the horizontal axis coordinate (for measuring width) in the world coordinate of the head center point, which in this example may be considered to lie on the vertical medial axis of the body region. In one example, the number of the head size and the body size that can be selected is 1, and the head size and the body size that can be selected include the same size in the same measurement direction. For example: the head height, body height, can be chosen to be calculated with the vertical axis coordinate (to measure height) in the world coordinates of the head center point. For another example: the head width and the body width can be selected to be calculated with a horizontal axis coordinate (for measuring the width) in the world coordinate of the head center point, and a specifically selected body parameter developer can select according to actual needs. For example, if the world coordinates of the head center point corresponding to the target object are represented as: (a) 1 ,b 1 ,z 1 ) The two vertexes are respectively the vertex at the upper left corner of the body and the vertex at the lower right corner of the body, and the world coordinates of the two vertexes can be respectively expressed as
[formula image; see the original publication]
where a_1 represents the world coordinate of the head center point on the vertical axis (measuring height), b_1 represents the world coordinate of the head center point on the horizontal axis (measuring width), and z_1 represents the world coordinate of the head center point on the depth axis (measuring depth). m represents the head height among the preset body parameters, M represents the body height among the preset body parameters, and N represents the body width among the preset body parameters.
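The vertex formulas themselves are given only as images in the original publication, so the sketch below reconstructs them under stated assumptions: the head center point lies on the body region's vertical medial axis, half a head height (m/2) below the top of the body region, with body height M and body width N taken from the preset body parameters.

```python
def body_vertices_world(head_center, m, M, N):
    """Estimate world coordinates of the body region's upper-left and
    lower-right vertices from the head center point (a_1, b_1, z_1).

    Assumed geometry (a reconstruction, not the verbatim formulas):
    the body region's top edge sits m/2 above the head center on the
    vertical axis, and the head center lies on the vertical medial
    axis of the body region.
    """
    a1, b1, z1 = head_center  # vertical, horizontal, depth
    upper_left = (a1 + m / 2.0, b1 - N / 2.0, z1)
    lower_right = (a1 + m / 2.0 - M, b1 + N / 2.0, z1)
    return upper_left, lower_right
```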
Finally, the pixel coordinates of the two vertices of the body area of the at least one target object are determined according to the preset conversion relation and the world coordinates of the two vertices of the body area of the at least one target object. Illustratively, continuing the above example, if the pixel coordinates of the head center point corresponding to the target object are expressed as (u_A, v_A), the pixel coordinates of the two vertices are respectively expressed as
[formula image; see the original publication]
Then, according to the above-mentioned predetermined transformation relationship and the world coordinates of the two vertices, the following equation can be determined:
[formula images; see the original publication]
where u_A and the vertical-axis components of the two vertex pixel coordinates are pixel coordinates on the vertical axis (measuring height), and v_A and the horizontal-axis components of the two vertex pixel coordinates are pixel coordinates on the horizontal axis (measuring width).
The above z_1 can be expressed as:
[formula image; see the original publication]
In one example, the world coordinates of the head center point are related to the installation height; in other words, the following equation can be obtained from the positional relationship among the camera, the head and the body in the actual scene:
[formula image; see the original publication]
where h represents the installation height among the shooting parameters.
With continued reference to fig. 2, in step S320, a second body size corresponding to the at least one target object is determined according to the pixel coordinates of the two vertices of the body region of the at least one target object. Illustratively, the second body size ΔU_A1 can be expressed by the following formula:
[formula image; see the original publication]
In one possible implementation, step S320 may include: determining an offset, in the measurement direction of the first body size, between the pixel coordinates of the two vertices of the body region of the at least one target object; and taking the offset as the second body size corresponding to the at least one target object when the offset is greater than a preset offset. Illustratively, taking the offset between the top-left vertex and the bottom-right vertex as an example, the preset offset may be set to 0. Since the top-left vertex is higher than the bottom-right vertex, if ΔU_A1 is less than or equal to 0, it may be determined that an abnormal condition has occurred in the detection method; detection of the target object may be abandoned, an abnormal prompt may be generated, and the like, and the embodiments of the present disclosure are not limited herein.
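The offset check in step S320 can be sketched as follows; since the formula for ΔU_A1 appears only as an image, the sign convention (top-left coordinate minus bottom-right coordinate on the vertical axis) is an assumption consistent with the statement that the top-left vertex is higher.

```python
def second_body_size(u_upper_left, u_lower_right, preset_offset=0.0):
    """Return the offset of the two vertex pixel coordinates along the
    first body size's measurement direction, or None on abnormality.

    Assumption: a normal detection yields a positive offset
    (upper-left minus lower-right on the vertical axis).
    """
    delta = u_upper_left - u_lower_right
    if delta > preset_offset:
        return delta  # taken as the second body size
    return None  # abnormal: abandon detection or raise a prompt
```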
Continuing to refer to fig. 1, in step S400, the shooting parameters of the image capturing apparatus are determined according to the first body size and the second body size corresponding to the at least one target object. Illustratively, the shooting parameters of the image acquisition device may include at least one of installation height, shooting focal length and shooting angle. For example, the measurement units of the first body size and the second body size are both pixel values; one of them can be determined based on the detection result of a machine learning model, and the other can be obtained from the pixel coordinates of the head center point through the foregoing conversion relation.
Illustratively, with continued reference to the above example, if the first body size is expressed as ΔU_A2, the shooting parameters may be determined based on the following formula:
[formula image; see the original publication]
In combination with this equation, the shooting angle θ among the shooting parameters can be determined; the shooting focal length in millimeters among the shooting parameters can be determined from f_x (measured in pixels) through the unit conversion relation in the related art; and the installation height h among the shooting parameters can be determined based on the relation between a_1 and h (see the equation relating a_1 and h above). In the case where there are a plurality of unknowns among the shooting parameters, the number of target objects may be increased by an appropriate amount to increase the number of equations. For example, the number of equations may be set to be greater than or equal to the number of unknowns to satisfy the condition for solving the unknowns, and the embodiments of the present disclosure are not limited herein.
In one possible implementation, step S400 may include: determining at least two first parameters according to the first body sizes and the corresponding second body sizes of at least two target objects, wherein the different first parameters are all shooting parameters of the same type. Illustratively, the at least two first parameters all represent the same one of the installation height, the shooting focal length and the shooting angle. The at least two first parameters are then fused, and the fusion result is taken as the shooting parameter. For example, the average value of the at least two first parameters, or a weighted value (for example, the weights may be positively correlated with the sharpness scores described below), may be used as the fusion result, that is, as the shooting parameter. According to the embodiments of the present disclosure, shooting parameters of the same type can be fused, so that the accuracy of determining the shooting parameters can be improved.
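The fusion described above can be sketched as a plain or weighted mean; the weighted variant uses weights that might, as the text suggests, be positively correlated with the sharpness scores, though the exact weighting scheme is left to the developer.

```python
def fuse_parameters(first_params, weights=None):
    """Fuse several first parameters of the same type (e.g. several
    installation-height estimates, one per target object) into a
    single shooting parameter.

    With no weights this is the plain average; with weights it is a
    weighted average (one possible fusion, not a mandated formula).
    """
    if weights is None:
        return sum(first_params) / len(first_params)
    return sum(p * w for p, w in zip(first_params, weights)) / sum(weights)
```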
In a possible implementation manner, before step S400, the determining method may further include:
A sharpness score corresponding to the at least one target object is determined according to the size relation between the first body size corresponding to the at least one target object and a predetermined size, wherein the sharpness score is inversely related to the size difference between the first body size and the predetermined size. For example, if the first body size is a body height, the predetermined size is also a body height, and the measurement directions of the two are the same. When the first body size of a target object is close to the predetermined size, the shooting sharpness of the target object is high; that is, the accuracy of the first body size and the head-center-point position generated based on the image to be detected containing the target object is high, which can further improve the accuracy of determining the shooting parameters. At least one first object is then screened out of all the target objects according to the sharpness score corresponding to the at least one target object, and the first object is taken as a new target object. For example, at least one first object with the highest sharpness score may be taken as the new target object. In this case, step S400 may include: determining the shooting parameters according to the first body size and the second body size corresponding to the new target object.
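The screening step can be sketched as follows; the score 1/(1 + |first_size - predetermined_size|) is only one convenient inverse relation between the score and the size difference, chosen here for illustration, and the (object_id, first_body_size) pair layout is likewise an assumption.

```python
def select_clearest(objects, predetermined_size, k=1):
    """Keep the k target objects whose first body size is closest to
    the predetermined size, i.e. those with the highest sharpness
    score under the illustrative score 1 / (1 + |size - preset|).
    """
    def score(obj):
        _, first_size = obj
        return 1.0 / (1.0 + abs(first_size - predetermined_size))
    return sorted(objects, key=score, reverse=True)[:k]
```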
It can be understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; owing to space limitations, details are not repeated in the present disclosure. Those skilled in the art can appreciate that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a determination apparatus for shooting parameters of an image capturing device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any determination method provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here for brevity.
Fig. 3 shows a block diagram of a determination apparatus of shooting parameters of an image capturing device according to an embodiment of the present disclosure, and as shown in fig. 3, the apparatus 100 includes: the image obtaining module 110 is configured to obtain at least one image to be detected. And the image to be detected is acquired by the image acquisition equipment. A first body size determining module 120, configured to determine a position of a head center point corresponding to at least one target object in the at least one image to be detected and a first body size corresponding to the at least one target object. The second body size determining module 130 is configured to determine a second body size corresponding to the at least one target object according to the position of the head center point corresponding to the at least one target object. Wherein the first body size and the second body size are measured in the same direction. The shooting parameter determining module 140 is configured to determine the shooting parameters according to the first body size and the second body size corresponding to the at least one target object.
In a possible embodiment, the determining a second body size corresponding to the at least one target object according to the position of the head center point corresponding to the at least one target object includes: determining pixel coordinates of two vertexes of a body area of the at least one target object according to the pixel coordinates of the head central point corresponding to the at least one target object; the pixel coordinates of the head central point are used for representing the position of the head central point corresponding to the at least one target object in the at least one image to be detected; the pixel coordinates of the two vertices of the body region differ in value in a direction of measurement of a first body dimension; and determining a second body size corresponding to the at least one target object according to the pixel coordinates of the two vertexes of the body area of the at least one target object.
In a possible embodiment, the determining pixel coordinates of two vertices of the body region of the at least one target object according to the pixel coordinates of the head center point corresponding to the at least one target object includes: determining world coordinates of the head central point corresponding to the at least one target object according to pixel coordinates of the head central point corresponding to the at least one target object and a preset conversion relation, wherein the preset conversion relation is determined according to the shooting parameters; determining world coordinates of two vertexes of a body area of the at least one target object according to the world coordinates of the head central point corresponding to the at least one target object and preset body parameters; and determining the pixel coordinates of the two vertexes of the body area of the at least one target object according to the preset conversion relation and the world coordinates of the two vertexes of the body area of the at least one target object.
In a possible embodiment, the shooting parameters of the image acquisition device include: at least one of installation height, shooting focal length and shooting angle; the preset conversion relation is related to the shooting focal length and the shooting angle; the world coordinates of the head center point are related to the mounting height.
In a possible embodiment, the determining a second body size corresponding to the at least one target object according to the pixel coordinates of the two vertices of the body region of the at least one target object includes: determining pixel coordinates of two vertices of a body region of the at least one target object, an offset in a measurement direction of a first body dimension; and taking the offset as a second body size corresponding to the at least one target object under the condition that the offset is larger than a preset offset.
In one possible embodiment, the determining means comprises: a sharpness score determination unit configured to perform the steps of: determining a definition score corresponding to the at least one target object according to the size relation between the first body size corresponding to the at least one target object and a preset size; wherein the sharpness score is inversely related to a difference in size between the first body dimension and a predetermined dimension; screening out at least one first object in all the target objects according to the definition score corresponding to the at least one target object, and taking the first object as a new target object; determining the shooting parameters according to a first body size and a second body size corresponding to the at least one target object, including: and determining the shooting parameters according to the first body size and the second body size corresponding to the new target object.
In a possible implementation manner, the determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object includes: determining at least two first parameters according to the first body sizes corresponding to the at least two target objects and the corresponding second body sizes; wherein the different first parameters are all shooting parameters of the same type; and fusing the at least two first parameters, and taking a fusion result as the shooting parameter.
In a possible embodiment, the determining the position of the head center point corresponding to at least one target object in the at least one image to be detected and the first body size corresponding to the at least one target object includes at least one of: determining the position of a head central point corresponding to at least one target object in at least one image to be detected through a preset head central point detection model; determining a detection frame corresponding to the at least one target object through a preset body detection model; determining the first body size based on a detection frame corresponding to the at least one target object.
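When the first body size is derived from a detection frame, one plausible choice (an assumption, since the patent leaves the derivation open) is the frame's extent along the measurement direction:

```python
def first_body_size_from_box(box):
    """Derive the first body size from a body detection frame.

    `box` is assumed to be (u_min, v_min, u_max, v_max) in pixels;
    with the measurement direction vertical (the u axis in this
    patent's convention), the frame height serves as the first
    body size.
    """
    u_min, _v_min, u_max, _v_max = box
    return abs(u_max - u_min)
```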
The method is specifically and technically related to the internal structure of the computer system, and can solve the technical problem of how to improve hardware computing efficiency or execution effect (including reducing the amount of data storage, reducing the amount of data transmission, increasing hardware processing speed, and the like), thereby obtaining the technical effect of improving the internal performance of the computer system in conformity with the laws of nature.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and for specific implementation, reference may be made to the description of the above method embodiments, and for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The electronic device may be provided as a server or other modality of device.
Fig. 4 illustrates a block diagram of an electronic device 1900 provided in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server or terminal device. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system of Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK) or the like.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible inherent logic.
If the technical solution of the present application involves personal information, a product applying the technical solution of the present application clearly informs users of the personal-information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution of the present application involves sensitive personal information, a product applying the technical solution of the present application obtains the individual's separate consent before processing the sensitive personal information, and at the same time satisfies the requirement of "express consent". For example, at a personal-information collection device such as a camera, a clear and conspicuous sign is set up to inform people that they are entering the personal-information collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, under the condition that the personal-information processing rules are communicated by conspicuous signs or information, personal authorization is obtained by means such as a pop-up window or asking the person to upload his or her personal information by himself or herself. The personal-information processing rules may include information such as the personal-information processor, the purpose of processing, the processing method, and the types of personal information to be processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A method for determining shooting parameters of an image acquisition device is characterized by comprising the following steps:
acquiring at least one image to be detected; the image to be detected is acquired by the image acquisition equipment;
determining the position of a head central point corresponding to at least one target object in the at least one image to be detected and a first body size corresponding to the at least one target object;
determining a second body size corresponding to the at least one target object according to the position of the head central point corresponding to the at least one target object; wherein the first body size and the second body size are the same in measurement direction and measurement unit, and the second body size is related to the shooting parameters;
and determining shooting parameters of the image acquisition equipment according to the first body size and the second body size corresponding to the at least one target object.
2. The method of claim 1, wherein determining the second body size corresponding to the at least one target object based on the position of the head center point corresponding to the at least one target object comprises:
determining pixel coordinates of two vertexes of a body area of the at least one target object according to the pixel coordinates of the head central point corresponding to the at least one target object; the pixel coordinates of the head central point are used for representing the position of the head central point corresponding to the at least one target object in the at least one image to be detected; the pixel coordinates of the two vertices of the body region differ in value in a direction of measurement of a first body dimension;
and determining a second body size corresponding to the at least one target object according to the pixel coordinates of the two vertexes of the body area of the at least one target object.
3. The determination method according to claim 2, wherein determining the pixel coordinates of the two vertices of the body region of the at least one target object according to the pixel coordinates of the head center point corresponding to the at least one target object comprises:
determining world coordinates of the head central point corresponding to the at least one target object according to pixel coordinates of the head central point corresponding to the at least one target object and a preset conversion relation, wherein the preset conversion relation is determined according to the shooting parameters;
determining world coordinates of two vertexes of a body area of the at least one target object according to the world coordinates of the head central point corresponding to the at least one target object and preset body parameters;
and determining the pixel coordinates of the two vertexes of the body area of the at least one target object according to the preset conversion relation and the world coordinates of the two vertexes of the body area of the at least one target object.
4. The determination method according to claim 3, wherein the shooting parameters of the image capturing apparatus include: at least one of installation height, shooting focal length and shooting angle; the preset conversion relation is related to the shooting focal length and the shooting angle; the world coordinates of the head center point are related to the mounting height.
5. The determination method according to claim 3 or 4, wherein determining a second body size corresponding to the at least one target object according to the pixel coordinates of the two vertices of the body region of the at least one target object comprises:
determining pixel coordinates of two vertices of a body region of the at least one target object, an offset in a measurement direction of a first body dimension;
and taking the offset as a second body size corresponding to the at least one target object under the condition that the offset is larger than a preset offset.
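The offset check of claim 5 reduces to a one-axis difference once the two vertices are in pixel coordinates. A sketch, assuming each vertex is a `(u, v)` pixel pair and the measurement direction is the vertical image axis (the axis choice and the `None` return for a rejected offset are assumptions, not stated in the claim):

```python
def second_body_size(vertex_a, vertex_b, preset_offset, axis=1):
    """Offset between the two body-region vertices along the measurement
    axis (axis=1: vertical pixel direction). Per claim 5, the offset is
    taken as the second body size only when it exceeds the preset offset;
    otherwise None is returned here as a stand-in for "not determined".
    """
    offset = abs(vertex_a[axis] - vertex_b[axis])
    return offset if offset > preset_offset else None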
6. The determination method according to any one of claims 1 to 5, wherein before the determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object, the method further comprises:
determining a sharpness score corresponding to the at least one target object according to the size relation between the first body size corresponding to the at least one target object and a preset size; wherein the sharpness score is inversely related to the size difference between the first body size and the preset size;
and screening out at least one first object from all the target objects according to the sharpness score corresponding to the at least one target object, and taking the first object as a new target object;
determining the shooting parameters according to a first body size and a second body size corresponding to the at least one target object, including:
and determining the shooting parameters according to the first body size and the second body size corresponding to the new target object.
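Claim 6 only requires the score to be inversely related to the gap between the first body size and the preset size; the scoring function and the screening threshold below are one plausible monotone choice, not the patent's formula:

```python
def sharpness_score(first_size, preset_size):
    """Inversely related to |first_size - preset_size|, as claim 6 requires;
    equals 1.0 when the observed size matches the preset size exactly."""
    return 1.0 / (1.0 + abs(first_size - preset_size))

def screen_objects(objects, preset_size, min_score=0.5):
    """Keep objects whose score clears a (hypothetical) threshold; the
    survivors become the "new target objects" of claim 6."""
    return [o for o in objects
            if sharpness_score(o["first_size"], preset_size) >= min_score]
```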
7. The determination method according to any one of claims 1 to 6, wherein the determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object comprises:
determining at least two first parameters according to the first body sizes corresponding to at least two target objects and the corresponding second body sizes; wherein the first parameters are all shooting parameters of the same type;
and fusing the at least two first parameters, and taking the fusion result as the shooting parameter.
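Claim 7 does not fix the fusion rule for the per-object estimates. Assuming each first parameter is a scalar estimate of the same shooting parameter (e.g., several installation-height estimates), a robust median is one plausible choice:

```python
from statistics import median

def fuse_parameters(first_params):
    """Fuse at least two per-object estimates of the same type of shooting
    parameter into a single value; the median is merely one robust option."""
    if len(first_params) < 2:
        raise ValueError("claim 7 requires at least two first parameters")
    return median(first_params)
```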
8. The determination method according to any one of claims 1 to 7, wherein the determining the position of the head center point corresponding to the at least one target object in the at least one image to be detected and the first body size corresponding to the at least one target object comprises at least one of:
determining the position of the head center point corresponding to the at least one target object in the at least one image to be detected through a preset head center point detection model;
determining a detection frame corresponding to the at least one target object through a preset body detection model;
determining the first body size based on a detection frame corresponding to the at least one target object.
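If the preset body detection model of claim 8 returns an axis-aligned detection frame, the first body size can be read off as the frame's extent along the measurement direction. A sketch, assuming a `(x1, y1, x2, y2)` pixel box and a vertical measurement direction by default (both assumptions, not specified in the claim):

```python
def first_body_size_from_box(box, axis="vertical"):
    """Side length of the detection frame along the measurement direction;
    box = (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    return abs(y2 - y1) if axis == "vertical" else abs(x2 - x1)
```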
9. A device for determining shooting parameters of an image acquisition apparatus, the device comprising:
the image acquisition module is used for acquiring at least one image to be detected; the image to be detected is acquired by the image acquisition equipment;
the first body size determining module is used for determining the position of a head center point corresponding to at least one target object in the at least one image to be detected and the first body size corresponding to the at least one target object;
the second body size determining module is used for determining a second body size corresponding to the at least one target object according to the position of the head center point corresponding to the at least one target object; wherein the first body size and the second body size are measured in the same direction;
and the shooting parameter determining module is used for determining the shooting parameters according to the first body size and the second body size corresponding to the at least one target object.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the determination method of any one of claims 1 to 8.
11. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the determination method of any one of claims 1 to 8.
CN202210457316.8A 2022-04-27 2022-04-27 Shooting parameter determining method and device of image acquisition equipment and electronic equipment Active CN114845055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210457316.8A CN114845055B (en) 2022-04-27 2022-04-27 Shooting parameter determining method and device of image acquisition equipment and electronic equipment


Publications (2)

Publication Number Publication Date
CN114845055A true CN114845055A (en) 2022-08-02
CN114845055B CN114845055B (en) 2024-03-22

Family

ID=82567811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210457316.8A Active CN114845055B (en) 2022-04-27 2022-04-27 Shooting parameter determining method and device of image acquisition equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN114845055B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008160602A (en) * 2006-12-25 2008-07-10 Matsushita Electric Works Ltd Imaging device and human detecting device
JP2010282377A (en) * 2009-06-03 2010-12-16 Fujifilm Corp Image forming apparatus, program, and method
CN110287828B (en) * 2019-06-11 2022-04-01 北京三快在线科技有限公司 Signal lamp detection method and device and electronic equipment
CN111405181B (en) * 2020-03-25 2022-01-28 维沃移动通信有限公司 Focusing method and electronic equipment
CN111739086A (en) * 2020-06-30 2020-10-02 上海商汤智能科技有限公司 Method and device for measuring area, electronic equipment and storage medium
CN113989696B (en) * 2021-09-18 2022-11-25 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114845055B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US10205896B2 (en) Automatic lens flare detection and correction for light-field images
CN110427917B (en) Method and device for detecting key points
WO2019161813A1 (en) Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium
CN108833785B (en) Fusion method and device of multi-view images, computer equipment and storage medium
US9135678B2 (en) Methods and apparatus for interfacing panoramic image stitching with post-processors
WO2011049046A1 (en) Image processing device, image processing method, image processing program, and recording medium
CN111612842B (en) Method and device for generating pose estimation model
WO2021136386A1 (en) Data processing method, terminal, and server
CN108389172B (en) Method and apparatus for generating information
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN112733820A (en) Obstacle information generation method and device, electronic equipment and computer readable medium
JP2016212784A (en) Image processing apparatus and image processing method
CN111402404B (en) Panorama complementing method and device, computer readable storage medium and electronic equipment
CN113724391A (en) Three-dimensional model construction method and device, electronic equipment and computer readable medium
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN114140771A (en) Automatic annotation method and system for image depth data set
CN114845055B (en) Shooting parameter determining method and device of image acquisition equipment and electronic equipment
CN115511870A (en) Object detection method and device, electronic equipment and storage medium
CN115761389A (en) Image sample amplification method and device, electronic device and storage medium
CN114677367A (en) Detection method, detection device, electronic equipment and storage medium
CN112615993A (en) Depth information acquisition method, binocular camera module, storage medium and electronic equipment
CN114708556A (en) Detection method, detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant