CN114966733B - Beef cattle three-dimensional depth image acquisition system based on laser array and monocular camera - Google Patents


Info

Publication number
CN114966733B
CN114966733B
Authority
CN
China
Prior art keywords
point
beef cattle
laser array
beef
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210421003.7A
Other languages
Chinese (zh)
Other versions
CN114966733A (en)
Inventor
蒋涛
康涛
张华宾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shunxin Futong Big Data Group Co ltd
Original Assignee
Beijing Fatoan Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Fatoan Technology Group Co ltd filed Critical Beijing Fatoan Technology Group Co ltd
Priority to CN202210421003.7A priority Critical patent/CN114966733B/en
Publication of CN114966733A publication Critical patent/CN114966733A/en
Application granted granted Critical
Publication of CN114966733B publication Critical patent/CN114966733B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01GWEIGHING
    • G01G17/00Apparatus for or methods of weighing material of special form or property
    • G01G17/08Apparatus for or methods of weighing material of special form or property for weighing livestock
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera. The system comprises a monocular global-exposure CMOS sensor image acquisition unit, a laser-array structured-light emission unit, an image processing algorithm unit, and a control unit. The image acquisition unit carries a monocular global-shutter camera that captures two-dimensional images; this camera, the structured-light emission unit, and the back-end image processing algorithm unit together form a 3D imaging system. The system acquires beef cattle body-measurement depth information, derives further information from it, and obtains body-measurement parameters by computing the distance between pairs of points on the laser-array pattern together with the fixed angle between them. The system effectively reduces the complexity of three-dimensional depth image acquisition, greatly lowers breeding costs for family farms, and still meets the accuracy requirement for weighing beef cattle.

Description

Beef cattle three-dimensional depth image acquisition system based on laser array and monocular camera
Technical Field
The invention belongs to the technical field of automatic measurement of beef cattle body-measurement data, and particularly relates to a beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera.
Background
With the marked improvement in living standards, demand for meat products is rising rapidly, making modernized, large-scale, precision livestock breeding in China an urgent need.
The fattening stage of livestock production is the stage in which beef cattle gain flesh, and the feed-to-meat ratio is the most critical parameter in this process, directly affecting farmers' costs and profits. The traditional way to weigh beef cattle is to drive them over a scale. The weight data obtained this way are direct and cheap, but driving cattle over a scale causes a stress reaction: afterwards the animal may not eat normally for a long time, may stop gaining flesh, and may even lose weight or die. Moreover, if the weight of the cattle can be tracked in real time, farmers can feed more scientifically and improve their feed formulas, so the cattle gain flesh better and faster.
At present, beef cattle three-dimensional depth image acquisition systems at home and abroad photograph the animal with two or more cameras and then reconstruct its three-dimensional shape in the background with a stereo matching algorithm. Because the algorithm's computational load is large, this approach has poor real-time performance and high cost. Market surveys show that comparable foreign products sell for roughly one million yuan per unit and domestic ones for about 400,000 yuan per unit, while beef cattle breeding in China is still mainly small-scale and family-run, so farmers generally cannot afford such prices. The high cost does not come from using several cameras; the cameras themselves are not the expensive part. The key is that the inter-camera algorithms and calibration are extremely complex and difficult work, requiring heavy R&D investment, repeated calibration before leaving the factory, and strict operating conditions; otherwise the weight estimate is directly affected. General-purpose multi-view measurement systems are therefore used for very precise visual measurement, where high cost is justified by tight accuracy requirements. Weighing cattle, however, does not demand first-class precision, so adopting a very precise three-dimensional visual measurement method gives extremely poor value for money.
The invention therefore provides a low-cost, high-accuracy, non-contact intelligent weighing system, which is of real significance for scientific feeding by farmers.
The first difficulty in building such a system is that existing solutions cost too much to popularize; the second comes from the respective shortcomings of existing image sensors. Existing CMOS image sensors fall into two types according to the control logic of their MOS transistors: rolling-shutter exposure and global exposure. A rolling-shutter image is captured across different moments in time, so when a moving object is shot the sensor outputs a distorted, blurred image; moreover, when a flickering pulsed light source illuminates the scene, the exposure logic prevents the complete light signal from appearing in a single frame. A global-exposure image is captured simultaneously within one exposure period, so no distortion arises even when shooting a fast-moving object. However, global exposure requires, besides the electronic-shutter structure, a storage unit inside each pixel to hold the charge generated during exposure until the readout circuit reads it row by row. Because this storage occupies part of the pixel, the fill factor drops and the sensitivity of the image sensor suffers.
In short, a conventional image sensor either blurs the image when shooting a moving object (rolling shutter) or avoids distortion at the cost of low sensitivity (global shutter).
Disclosure of Invention
To solve these problems of the prior art, the invention provides a beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera. Its first aim is to find a way to effectively reduce the complexity of such a system while meeting the accuracy requirement for weighing beef cattle. Its second aim is to overcome the shortcomings of existing image sensors, which either blur fast-moving objects or, while avoiding distortion, suffer from low sensitivity.
The invention provides the following technical scheme for solving the technical problems.
A beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera comprises a monocular global-exposure CMOS sensor image acquisition unit, a laser-array structured-light emission unit, an image processing algorithm unit, and a control unit. The image acquisition unit has an optimized photosensitive unit and can capture moving objects at high speed. The structured-light emission unit projects a near-infrared laser array; after reflection from the target, the deformed laser-array pattern is received by the image acquisition unit, and the image processing unit finally computes the three-dimensional depth position of the target from the captured picture.
the single-eye global exposure CMOS sensor is characterized in that an image acquisition unit of the single-eye global exposure CMOS sensor is provided with a single-eye global camera capable of shooting two-dimensional images, and the single-eye global camera capable of shooting two-dimensional images, a laser array structure light emission unit and a rear-end image processing algorithm unit form a 3D imaging system;
the image processing algorithm module is used for acquiring beef body ruler depth information, acquiring other information based on the beef body ruler depth information and acquiring beef body ruler parameters by calculating the distance between two points on the laser array pattern and a fixed included angle between the two points; the calculation module for acquiring the body ruler depth information of the beef cattle is as follows: the abdomen width maximum point A calculation module, the shoulder end point D calculation module and the ischial node F calculation module; the calculation module for acquiring other information based on the body size depth information of the beef cattle is as follows: astragalus membranaceus highest point G calculation module, body depth upper and lower end points H up And H down The beef cattle body symmetry plane calculation module is used for calculating the beef cattle body symmetry plane; the beef body ruler parameters are obtained by calculating the body oblique length, the body straight length, the shoulder width, the abdomen width, the body height and the body depth according to the 6 points.
Further, the laser array structure light emitting unit comprises a laser driving circuit for driving the laser array structure light source, a laser array structure light source for generating a near infrared band laser array, and a laser array emitting optical module for projecting the laser array onto a target object.
Further, the monocular global-exposure CMOS sensor image acquisition unit comprises a laser-array-pattern receiving optical module and a global-exposure CMOS sensor. The global-exposure CMOS sensor comprises a 1288×1032 photosensitive array, a floating-gate source follower, a filter circuit, and a pixel-level micro-optical structure. The photosensitive array improves the response efficiency to the detection laser and passes the resulting electrical signal to the floating-gate source follower; the follower receives and amplifies that signal to raise the drive capability of the output circuit; the filter circuit receives the amplified signal and filters out noise light from the environment; and the micro-optical structure filters out light in all bands other than that of the detection laser source.
Further, the parameters of the beef cattle three-dimensional depth image acquisition system are as follows: a CL-VCLB71AA VCSEL area-array invisible near-infrared laser, wavelength 808 nm, with a 3000-dot array; body-image acquisition distance ≥ 5 m; acquisition height ≥ 2 times the body height of the animal; camera field of view ≥ 25°, enough to cover the whole animal; laser emission angle ≥ 20°; camera shooting angle perpendicular to the side of the animal; operating temperature −40 °C to 50 °C. The monocular global-shutter camera works together with the structured-light emission unit, responds strongly to near-infrared light, and uses a fixed-focus lens, which keeps the system light and compact and keeps the optical axis stable in use.
Further, the image processing algorithm unit includes a laser-dot-matrix-based body-measurement data acquisition sub-unit and a laser-dot-matrix-based weight estimation sub-unit. The body-measurement data acquisition sub-unit comprises: a system initialization and effective-contour extraction module, a body-measurement depth information acquisition module, a module for deriving further information from that depth information, a body-measurement parameter acquisition module, and an image-processing-based body-measurement accuracy comparison and analysis module. The weight estimation sub-unit predicts the weight of the animal by building a relation model between body-measurement parameters and weight, and comprises: an image-processing-based weight estimation scheme research module, a weight estimation model building module based on body-measurement data, an image-processing-based weight estimation accuracy comparison and analysis module, and a real-time feed adjustment and precision feeding module.
Further, the maximum-abdomen-width point A calculation module comprises a sub-module that divides out the region containing the minimum z value, a sub-module that computes the distance between two dots, a sub-module that smooths the minimum-z-value curve, and a sub-module that extracts point A. Within this region, the smaller the spacing between two dots, the closer the target is to the camera; the larger the spacing, the farther away it is. The point on the target closest to the camera is the minimum-z-value point, and that point is the maximum abdomen width point A.
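A minimal sketch of the minimum-z search with smoothing described above. The function name, the smoothing window, and the NumPy point-cloud representation are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def abdomen_width_max_point(points: np.ndarray, window: int = 5) -> np.ndarray:
    """Find the abdomen-region point closest to the camera (minimum z).

    points: (N, 3) array of (x, y, z) samples from the abdomen region.
    A small moving average along the body (x) axis smooths per-dot depth
    noise before taking the minimum, mirroring the 'smooth minimum z value
    point curve' sub-module."""
    order = np.argsort(points[:, 0])          # sort along the body axis
    z = points[order, 2]
    kernel = np.ones(window) / window
    z_smooth = np.convolve(z, kernel, mode="same")
    return points[order[np.argmin(z_smooth)]]
```

The returned point is the original (unsmoothed) sample whose smoothed depth is minimal, so the reported coordinates stay faithful to the measured dot.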
Further, the shoulder end point D calculation module comprises a region-division sub-module for point D, a sub-module that identifies the special points B and C, and a sub-module that computes the maximum-distance point. Points B and C lie on either side of D, each at the boundary where the surface turns from rising to falling along the depth direction. The point-D sub-module computes the distance from every point in the region to the line connecting B and C and selects the point with the maximum distance; that point is also the minimum-z point of its region, and it is the shoulder end point D.
Further, the ischial node F calculation module comprises a region-division sub-module for node F, a k-nearest-neighbour z-value calculation sub-module, and an F calculation sub-module. The F sub-module determines the ischial node from the maximum of the z values over the k nearest neighbours; the point with the maximum z value is the ischial node F. This point lies roughly at the midpoint of the animal's body and can serve as the centre point for computing abdomen width and shoulder width.
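The k-nearest-neighbour depth rule for the ischial node can be sketched like this. The brute-force neighbour search, the value of k, and the neighbourhood-mean aggregation are illustrative assumptions.

```python
import numpy as np

def ischial_node(region: np.ndarray, k: int = 8) -> np.ndarray:
    """Pick the ischial node F as the point whose k-nearest-neighbour mean
    depth (z) is largest, i.e. the farthest point from the camera in this
    region, made robust to single-dot noise by neighbourhood averaging.
    Brute-force O(N^2) neighbour search; fine for a 3000-dot array."""
    diffs = region[:, None, :2] - region[None, :, :2]   # pairwise x-y offsets
    dist2 = np.einsum("ijk,ijk->ij", diffs, diffs)      # squared distances
    idx = np.argsort(dist2, axis=1)[:, 1:k + 1]         # k nearest, self excluded
    mean_z = region[idx, 2].mean(axis=1)
    return region[np.argmax(mean_z)]
```

Averaging over neighbours avoids latching onto a single outlier dot, which plain argmax over raw z would do.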
further, the milkvetch part highest point G calculation module based on the acquisition of other information of the beef body ruler depth information comprises a milkvetch part highest point G region division submodule, a calculation midpoint x coordinate calculation submodule, a plurality of slice point cloud submodules expanded along a midpoint x coordinate, a milkvetch part highest point G calculation submodule and a highest point submodule for searching a y axis, wherein the obtained highest point is the milkvetch part G;
body depth upper and lower endpoint H obtained based on other information of beef body ruler depth information up And H down The calculation module comprises a body depth belonging area division submodule, a reference point determination submodule, a plurality of slice point cloud submodules which are expanded left and right along the reference point, a vertex calculation submodule of each slice point cloud, a vertex searching submodule of each slice y axis and a lowest point searching submodule of each slice y axis, wherein the obtained y-axis highest point and the obtained y-axis lowest point are an upper end point H and a lower end point H of the body depth up And H down
Further, the research on image-processing-based beef cattle weight estimation covers a multiple linear regression model, the partial least squares method, and an RBF neural network model.
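Of the three candidate models, the multiple linear regression can be sketched with ordinary least squares. The feature set and data below are hypothetical; the patent's actual model coefficients come from its own experiments.

```python
import numpy as np

def fit_weight_model(body_measurements: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Least-squares fit of weight = b0 + b1*x1 + ... over body-measurement
    features (e.g. body length, body height, abdomen width).

    body_measurements: (N, F) feature matrix; weights: (N,) observed weights.
    Returns the (F+1,) coefficient vector including the intercept b0."""
    X = np.hstack([np.ones((len(body_measurements), 1)), body_measurements])
    coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
    return coef

def predict_weight(coef: np.ndarray, measurements: np.ndarray) -> float:
    """Predict one animal's weight from its body-measurement vector."""
    return float(coef[0] + measurements @ coef[1:])
```

In practice the model would be fitted on measurement-weight pairs collected with a reference scale, then applied to new animals from the image-derived measurements alone.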
Advantageous effects of the invention
1. The invention organically combines a monocular global-shutter camera with an optimized photosensitive unit, a structured-light emission unit whose laser array covers the target, and a body-measurement key-point calculation method based on body depth information. It thereby effectively reduces the complexity of the three-dimensional depth image acquisition system, greatly lowers breeding costs for family cattle farmers, and meets the weighing accuracy requirement. Because 3D imaging is achieved with a single camera, the inter-camera algorithms and the extremely complicated calibration steps of the traditional approach are avoided, greatly reducing manufacturing cost: compared with similar foreign and domestic products, the market price is more than 80% and 50% lower respectively.
2. By adopting a monocular global-shutter camera with an optimized photosensitive unit, the invention keeps the global shutter's ability to capture moving objects at high speed while solving its weak light sensitivity. Specifically, the photosensitive performance is improved by a 1288×1032 photosensitive array that raises the laser response efficiency and passes the resulting signal to a floating-gate source follower; the follower receives and amplifies the signal to raise the drive capability of the output circuit; a filter circuit receives the amplified signal and removes noise light from the environment; and a pixel-level micro-optical structure filters out light in all bands other than that of the laser source.
3. The invention finds an optimal balance between lowering manufacturing cost and meeting the feed-to-meat-ratio accuracy requirement of the fattening stage. On the basis of meeting the weighing accuracy requirement, fully automatic non-contact 3D imaging is achieved with a simple structure: a monocular camera, a structured-light emission unit, and a back-end algorithm. Imaging is not restricted by whether the animal is standing still or walking; as long as the camera shoots perpendicular to the side of the animal's body, a precise, sharp laser-array pattern can be captured even while the animal walks quickly.
Drawings
FIG. 1 is a frame diagram of a three-dimensional depth image acquisition system for beef cattle according to the present invention;
FIG. 2 is a block diagram of an image processing unit according to the present invention;
FIG. 3 is a flow chart of the beef body size data acquisition subunit processing of the present invention;
FIG. 4 is a flow chart of the processing of the beef cattle weight estimation subunit based on the laser lattice according to the present invention;
FIG. 5 shows the formulas for calculating the body-measurement parameters of the cattle according to the present invention;
FIG. 6-1 is a top view of the body profile data of beef cattle of the present invention;
FIG. 6-2 is a side view of the body shape data of beef cattle of the present invention;
FIG. 7 is a side view of a beef cattle depth image of the present invention;
FIG. 8-1 is a schematic diagram of the minimum z-value point in the original point cloud according to the present invention;
FIG. 8-2 is a diagram illustrating a minimum z-value point of the present invention;
FIG. 9 is a plan view of the beef cattle depth image projected onto xoz in accordance with the present invention;
FIG. 10 is a schematic view of the maximum abdominal width of a beef cattle according to the present invention;
FIG. 11 is a schematic view of the shoulder end of a beef cattle of the present invention;
FIG. 12 is a schematic view of an ischial junction of the present invention;
FIG. 13 is a schematic view of the highest point of the withers of the beef cattle;
FIG. 14 is a schematic view of the upper and lower end points of the body depth of a beef cattle of the present invention.
Detailed Description
Design principle of the invention
1. Principle of the monocular-camera 3D imaging system: the invention does not restore the three-dimensional image of the animal with multiple cameras and inter-camera calibration; instead it uses a simpler method with only one camera in the 3D imaging system. One camera can capture only a two-dimensional image: shooting the side of the animal yields its length and height but no depth. To obtain a depth image, the invention projects a dot-matrix laser, for example a 3000-dot array. The emission angle of every dot is fixed, so for a given pair of dots the spacing on the target takes one fixed value at 5 m and a different fixed value at 3 m; reading the spacing between two dots on the target therefore gives the distance between the target point and the camera. Reading this spacing at different places on the body yields the depth at each place. The spacing in question is the distance between two points on the laser-array pattern after the structured-light emission unit projects the near-infrared array onto the animal and the pattern is deformed by reflection from the target.
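The dot-spacing-to-depth relation above can be sketched with similar triangles. This is a hypothetical illustration; the emitter model and the numbers are assumptions, not taken from the patent.

```python
import math

def depth_from_dot_spacing(spacing_m: float, pair_angle_rad: float) -> float:
    """Estimate target depth from the measured spacing between two laser
    dots whose angular separation is fixed by the emitter geometry.

    By similar triangles: spacing = 2 * depth * tan(angle / 2),
    hence depth = spacing / (2 * tan(angle / 2))."""
    return spacing_m / (2.0 * math.tan(pair_angle_rad / 2.0))

# A dot pair with 0.4 degree angular separation, observed 35 mm apart
# on the animal's flank (hypothetical numbers):
depth = depth_from_dot_spacing(0.035, math.radians(0.4))
```

The farther the target, the wider the same dot pair appears, which matches the text's observation that the spacing takes a different fixed value at each distance.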
2. Principle of key-point calculation based on body depth information. The general idea is as follows: first, divide the body into three regions with distinctive depth features; second, find the depth feature points of each region, namely the maximum abdomen width point A, the shoulder end point D, the ischial node F, and the auxiliary feature points B and C; third, from the known points A, D, F, B, C find the three remaining key points (the withers highest point G and the body-depth endpoints H_up and H_down); fourth, from the x coordinates of those three key points in two-dimensional space obtain the y coordinate corresponding to each x, i.e. the height information.
1) Acquiring the maximum abdomen width point A, the shoulder end point D, and the ischial node F. First, the camera and the array laser emitter shoot and emit perpendicular to the side of the animal, so that the laser array covers the side of the body. Second, the captured laser-array pattern is divided into several regions, including the regions of A, D, and F. Because the body has varying depth, the reflected dot pattern is necessarily deformed: the spacing between two dots differs at different depths, so A, D, and F are found from the dot spacings within each region. The maximum abdomen width point A is the minimum-z point of its region (the point nearest the camera in that region); the shoulder end point D is likewise the minimum-z point of its region; the ischial node F is the maximum-z point of its region (the point farthest from the camera). The region of the shoulder end point D is delimited as follows: first find the points B and C at which the slope changes on either side of D; B and C each lie at the boundary where the surface turns from rising to falling along the z axis (taking z as the depth direction of the body). B and C are computed, from the depth differences along z, as the maximum-z points of the two rising-to-falling areas.
2) Acquiring the withers highest point G and the body-depth endpoints H_up and H_down. First, the withers highest point G. G is height information rather than depth information, but it can be derived from the depth information: using the known depth points D and B, compute the x coordinate of the midpoint of B and D; with that x coordinate, expand several slice point clouds to its left and right along the x axis, compute the highest point of each slice, and then take the highest of those highest points, which is the withers point G. Second, the body-depth endpoints H_up and H_down. These, too, are height information derived from depth information: using the known maximum abdomen width point A as reference (the reference point shares A's x coordinate), expand two slice point clouds along the x axis, find the highest and lowest point of each slice, then take the highest of all slice maxima as the upper endpoint H_up and the lowest of all slice minima as the lower endpoint H_down.
3. The design principle of combining the monocular global camera with the laser array structured light is as follows: since beef cattle are constantly in motion, and in order to obtain a clearer image, the invention designs a monocular global-exposure CMOS sensor image acquisition unit whose light sensitivity is optimized. The method for optimizing the photosensitive unit is: increase the response efficiency to the laser by optimizing the photosensitive unit and the floating-gate source follower, then design a micro-optical structure so that only light at the detection-laser wavelength is received. Specifically: a narrow-band filter and a conical light-channel structure are added to the micro-optical structure to block noise light of other wavebands in the ambient light, and a pixel-level filter circuit is added so that only the laser signal passes while ambient noise light in the same waveband as the laser but at a different frequency is filtered out; thus the image sensor records only the light projected by the laser and rejects other noise light;
the monocular global camera and the laser array structured light are interdependent: if the light sensitivity of the global-exposure sensor is not optimized, then even though the laser array is projected onto the target object, the sensitivity is too low, the image of the laser array is blurred, and the accuracy of the dot-spacing calculation between two points of the laser array is impaired; conversely, if only the sensitivity of the global-exposure sensor is effectively optimized but algorithms and calibration across multiple cameras are still adopted, the problem of reducing cost to aid popularization cannot be solved. Only the organic combination of the two produces the intended effect.
Based on the principle of the invention, the invention designs a beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera.
A beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera, shown in figure 1, comprises a monocular global-exposure CMOS sensor image acquisition unit, a laser array structured-light emitting unit, an image processing algorithm unit and a control unit. The monocular global-exposure CMOS sensor image acquisition unit has an optimized photosensitive unit and captures the moving object at high speed; the laser array structured-light emitting unit projects a laser array in the near-infrared waveband; the deformed laser array pattern reflected by the target object is received by the monocular global-exposure CMOS sensor image acquisition unit; and finally the image processing unit calculates the three-dimensional depth position of the target object from the captured picture;
the method is characterized in that an image acquisition unit of the monocular global exposure CMOS sensor is provided with a monocular global camera capable of shooting two-dimensional images, and the monocular global camera capable of shooting two-dimensional images, a laser array structure light emission unit and a rear-end image processing algorithm unit form a 3D imaging system;
as shown in figs. 6-1, 6-2 and 7 to 14, the image processing algorithm module uses the distance between two dots on the laser array pattern and the fixed included angle between them to acquire the beef cattle body-size depth information, acquire other information based on that depth information, and acquire the beef cattle body-size parameters. The calculation modules for acquiring the body-size depth information are: the maximum abdominal width point A calculation module, the shoulder end point D calculation module and the ischial node F calculation module. The calculation modules for acquiring other information based on the body-size depth information are: the withers highest point G calculation module, the body-depth end points H_up and H_down calculation module, and the beef cattle body symmetry-plane calculation module. The beef cattle body-size parameters (body slant length, body straight length, shoulder width, abdominal width, body height and body depth) are then computed from these 6 points.
As shown in fig. 1, the laser array structured light emitting unit includes a laser driving circuit for driving the laser array structured light source, a laser array structured light source for generating a laser array in a near infrared band, and a laser array emitting optical module for projecting the laser array onto a target object.
As shown in fig. 1, the monocular global exposure CMOS sensor image capturing unit includes a laser array pattern receiving optical module for receiving a laser array pattern, and a global exposure CMOS sensor; the global exposure CMOS sensor comprises a 1288 × 1032V photosensitive unit, a floating gate source follower, a filter circuit and a pixel micro-optical structure, wherein the 1288 × 1032V photosensitive unit is used for improving the response efficiency of detection laser and transmitting an electric signal after the response efficiency is improved to the floating gate source follower, the floating gate source follower is used for receiving and amplifying the electric signal transmitted by the 1288 × 1032V photosensitive unit so as to improve the amplification capacity of an output circuit, the filter circuit is used for receiving the signal amplified by the floating gate source follower and filtering noise light in the environment, and the pixel micro-optical structure is used for filtering light in other wavebands except the waveband band of the detection laser light source.
Supplementary explanation:
the 1288 × 1032V photosensitive unit is made of silicon and is structurally provided with a floating gate MOS capacitor, and the structure directly samples photo-generated electronic signals through the potential coupling effect; the wafer thickness of the globally exposed CMOS sensor chip is made to be 12-15 μm, so that the photon absorption efficiency of the 1288X 1032V photosensitive unit to 808nm is improved to 65.32% from the current 6.73%.
The floating-gate source follower is the follower amplification circuit of the 1288 × 1032V photosensitive unit and comprises the floating-gate source follower manufacturing process and a signal-storage-node capacitance. In the manufacturing process, the dose of boron atoms implanted in the first (isolation) step is 5.5e12 cm⁻³ with an implantation energy of 121 keV; in the second step, the dose of boron atoms implanted to prevent punch-through is 1.2e13 cm⁻³ with an implantation energy of 38 keV; in the third step, the dose of phosphorus atoms implanted to adjust the floating-gate source follower is 0 cm⁻³ with an implantation energy of 0 keV, making the channel length of the floating-gate source follower 450 nm. The signal-storage-node capacitance is designed to be 1.8 fF, making the floating-gate source follower voltage 0.7 V.
The spherical-surface diameter of the pixel micro-optical structure is 4.0 μm and its thickness is 1.0 μm; a narrow band-pass filter film with a pass wavelength of 808 ± 2.5 nm is coated onto the micro-lens on the pixel surface, filtering out light in wavebands other than that of the detection laser source and improving the signal-to-noise ratio of the incident light. The micro-optical structure adopts a conical-light-channel pixel structure working in combination with the micro-lens; the conical light channel is an inverted trapezoidal structure.
The filter circuit has a high-frequency cut-off of 200 Hz and a low-frequency cut-off of 40 Hz, ensuring that the image sensor receives only the detection-laser image at a frequency of 120 Hz while reflections in the ambient light at the same wavelength as the laser but at a different frequency are filtered out, improving the signal-to-noise ratio of the image.
The parameters of the beef cattle three-dimensional depth image acquisition system are as follows: a VCSEL area-array invisible near-infrared laser of model CL-VCLB71AA with a wavelength of 808 nm and 3000 dots is adopted; the beef cattle body-shape image acquisition distance is ≥ 5 m; the acquisition height is ≥ 2 times the body height of the beef cattle; camera field of view: ≥ 25°, enough to cover the whole cattle target; laser emission angle: ≥ 20°; camera shooting angle: perpendicular to the flank of the beef cattle; working temperature: -40 °C to 50 °C. The monocular global camera works in cooperation with the laser array structured-light emitting unit, has a high response rate to near-infrared light, and adopts a fixed-focus lens, making the system light and compact and keeping the optical axis stable during use.
The image processing unit, as shown in figs. 2 and 3, comprises a laser-dot-matrix-based beef cattle body-size data acquisition subunit and a laser-dot-matrix-based beef cattle weight estimation subunit. The body-size data acquisition subunit includes: a system initialization and beef cattle effective-contour extraction module, a beef cattle body-size depth-information acquisition module, a module for acquiring other information based on the body-size depth information, a beef cattle body-size parameter acquisition module, and an image-processing-based beef cattle body-size data precision comparison and analysis module. The weight estimation subunit obtains the predicted weight of the beef cattle by constructing a relation model between animal body-size parameters and weight, and is provided with an image-processing-based beef cattle weight estimation scheme research module, a body-size-data-based weight estimation model building module, an image-processing-based beef cattle weight estimation precision comparison and analysis module, and a real-time beef cattle feed adjustment precision feeding module.
As shown in figs. 7 to 14, the maximum abdominal width point A calculation module for acquiring the beef cattle body-size depth information comprises a submodule dividing the region to which the minimum-z point belongs, a two-dot distance calculation submodule, a minimum-z-point curve smoothing submodule and a maximum abdominal width point A acquisition submodule. Within the region to which the minimum-z point belongs, the smaller the distance between two dots, the closer the target object is to the camera, and the larger the distance, the farther the target object is; the point of the target object closest to the camera is the minimum-z point, and that minimum-z point is the maximum abdominal width point A.
As shown in figs. 7 to 14, the shoulder end point D calculation module for acquiring the beef cattle body-size depth information comprises a submodule dividing the region to which the shoulder end point D belongs, a B and C special-point identification submodule and a maximum-distance point calculation submodule. The special points B and C lie on the two sides of the shoulder end point D, each at the boundary where the slope along the depth direction of the cattle body turns from rising to falling. The D-point calculation submodule computes the distance from each point to the line connecting the two special points B and C and finds the point with the maximum distance; this point is also the minimum-z point in its region, and it is the shoulder end point D.
As shown in figs. 7 to 14, the ischial node F calculation module for acquiring the beef cattle body-size depth information comprises a submodule dividing the region to which the ischial node F belongs, a k-neighbouring-point z-value calculation submodule and an ischial node F calculation submodule. The F calculation submodule determines the ischial node F as the point with the maximum z value among the k neighbouring points; this point lies essentially at the midline of the cattle body and can serve as the central point for computing the abdominal width and shoulder width of the beef cattle.
As shown in figs. 7 to 14, the withers highest point G calculation module, for the other information based on the beef cattle body-size depth information, comprises a submodule dividing the region to which the withers highest point G belongs, a midpoint x-coordinate calculation submodule, a submodule expanding several slice point clouds around the midpoint x coordinate, a per-slice highest-point calculation submodule, and a submodule finding the highest point along the y axis; this highest point is the withers point G;
as shown in figs. 7 to 14, the body-depth end points H_up and H_down calculation module, for the other information based on the beef cattle body-size depth information, comprises a submodule dividing the region to which the body depth belongs, a reference-point determination submodule, a submodule expanding several slice point clouds to the left and right of the reference point, a per-slice highest-point calculation submodule, and submodules finding the highest and lowest y-axis points of each slice; the obtained y-axis highest and lowest points are the upper and lower body-depth end points H_up and H_down.
The research on the image-processing-based beef cattle weight estimation scheme covers a multiple linear regression model, the partial least squares method and an RBF neural network model; the RBF neural network model is finally selected for estimating the weight of the beef cattle.
Comparison of several beef cattle weight estimation schemes:
The multiple linear regression method is one of the most commonly used methods in regression analysis and is widely applied in production practice. However, it requires that there be no multicollinearity among the independent variables, so before the regression equation is established the collinearity among the independent variables must be tested, otherwise the error of the dependent variable increases. The multiple linear regression method may also miss nonlinear relationships between the dependent and independent variables, increasing the prediction error.
The partial least squares method obtains the best function match by minimizing the sum of squared errors and can build a model even when the independent variables exhibit severe multicollinearity. Its prediction accuracy is higher than that of the multiple linear regression model but lower than that of the RBF neural network model.
The RBF neural network comprises an input layer, a hidden layer and an output layer. It approximates nonlinear functions well, can overcome both the collinearity among independent variables and the nonlinearity between the independent and dependent variables, and thus achieves better prediction accuracy. In this work, 500 groups of beef cattle body-size data were collected in total and an RBF neural network model of weight versus body-size parameters was established; the results show that the coefficient of determination R² between the predicted and measured weight is 0.998, a goodness of fit higher than the 0.935 of the linear regression model, and that the RBF neural network prediction model eliminates the collinearity problem among the original independent variables present in the linear regression analysis.
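As a sketch of the kind of RBF network described here (input layer, Gaussian hidden layer, linear output layer), the following minimal NumPy implementation uses every training sample as a hidden-unit centre and solves the output weights by least squares. This is an illustrative stand-in, not the patent's actual model; the toy data, kernel width and training method are assumptions:

```python
import numpy as np

class RBFRegressor:
    """Minimal Gaussian-RBF network: radial hidden units, linear output layer."""

    def __init__(self, sigma=0.05):
        self.sigma = sigma

    def _phi(self, X):
        # hidden-layer activations: one Gaussian unit per stored centre
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        self.centers = X                                   # one centre per sample
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# toy nonlinear relation standing in for body-size -> weight
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
model = RBFRegressor(sigma=0.05).fit(X, y)
```

In practice one would use fewer centres than samples (e.g. chosen by clustering) to avoid overfitting; the all-samples choice here just keeps the sketch short.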
Based on the beef cattle three-dimensional depth image acquisition system, the invention also designs a beef cattle three-dimensional depth image acquisition method based on the laser array and the monocular camera, and the method comprises the following steps as shown in figure 3:
Step one, initializing the system and extracting the effective profile of the beef cattle;
the system initialization requires: beef cattle body-shape image acquisition distance ≥ 5 m; acquisition height ≥ 2 times the body height of the beef cattle; camera field of view: ≥ 25°, covering the whole cattle target; laser emission angle: ≥ 20°; camera shooting angle perpendicular to the flank of the beef cattle; working temperature: -40 °C to 50 °C;
Supplementary explanation: as shown in fig. 7, the camera photographs the side of the beef cattle; the head and tail are removed from the side view, and the extracted effective profile is shown in fig. 7.
The method for extracting the effective profile of the beef cattle comprises the following steps:
1) Calculating the envelope size of the beef cattle: as shown in fig. 7, the horizontal length of the beef cattle head is calculated to be 15%-25% of the horizontal body length; through experiments this proportion is set to 20%, and the unneeded head portion of the image is removed by a pass-through filtering method;
2) Because the tail of the acquired beef cattle image may be cocked upward, in order to reduce the influence of the tail on the envelope image, tail slices of the beef cattle image containing fewer than 15 pixel points are removed.
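The two cropping rules above (drop a head-length fraction of the x-extent, then drop sparse tail slices) can be sketched as a pass-through filter on a 2-D point set. The bin width, the end on which the head lies, and the toy data are all assumptions for illustration:

```python
import numpy as np

def effective_contour(points, head_ratio=0.20, tail_min_pixels=15, bin_width=0.01):
    """Crop the head (a fixed fraction of body length) and sparse tail slices."""
    x = points[:, 0]
    x_min, x_max = x.min(), x.max()
    # pass-through filter: drop the head portion (assumed here at the high-x end)
    kept = points[x <= x_max - head_ratio * (x_max - x_min)]
    # drop sparse slices: x-bins with fewer than tail_min_pixels points
    bins = np.rint((kept[:, 0] - x_min) / bin_width).astype(int)
    counts = np.bincount(bins)
    return kept[counts[bins] >= tail_min_pixels]

# toy silhouette: dense body columns, a sparse tail wisp, and a head end to crop
xs = np.concatenate([np.full(20, 0.1), np.full(20, 0.2),   # dense body slices
                     np.full(3, 0.5),                      # sparse tail slice
                     np.full(10, 0.95), [0.0, 1.0]])       # head end + extremes
pts = np.stack([xs, np.zeros_like(xs)], axis=1)
body = effective_contour(pts)
```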
Step two, acquiring the beef cattle body-size depth information;
the beef cattle body-size depth information acquisition consists of: acquiring the maximum abdominal width point A, the shoulder end point D and the ischial node F of the cattle-body depth information;
Supplementary explanation: as shown in fig. 10, point A is the maximum abdominal width point; as shown in fig. 11, point D is the shoulder end point; and as shown in fig. 12, point F is the ischial node. The three points have different z values representing different depths: point A has the minimum z value, indicating that A is closest to the camera; the z value of point D is greater than that of A, indicating that D is farther from the camera than A; and the z values of points F and D are similar, indicating that F is likewise farther from the camera than A.
As shown in figs. 8-1, 8-2, 9 and 10, the specific process of acquiring the maximum abdominal width point A of the cattle-body depth information is as follows:
1) Extracting the minimum-z points:
a. project the side-view points two-dimensionally onto the xoz plane to obtain a top view; the y coordinate reflects the spatial relationship between points; the x axis represents the length direction of the beef cattle flank and the z axis its depth direction;
b. find the minimum-z point on each slice point cloud along the x axis and combine these points into the beef cattle depth-image discrete point sequence P_zmin;
c. project the point sequence P_zmin onto the xoz plane;
2) Smoothing the minimum z-value point array;
A cubic polynomial z = a_0 + a_1*x + a_2*x^2 + a_3*x^3 is used for fitting, with the following steps:
a. Let the total number of points in the sequence be N_p and the per-fit point count be n_p. For each point from the (n_p + 1)-th to the (N_p - n_p)-th, take the point as reference and extend n_p points forward and n_p points backward; fit the polynomial to these (2*n_p + 1) points, then evaluate the fitted polynomial at the x value of the reference point to obtain the fitted z value of that point.
b. For the 1st to n_p-th points, the fit is computed backward using the fitting polynomial of the (n_p + 1)-th point; similarly, for the (N_p - n_p + 1)-th to N_p-th points, the fit is computed using the fitting polynomial of the (N_p - n_p)-th point;
c. Setting n_p = 15 gives a smooth point sequence after fitting.
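The sliding cubic fit of steps a-c can be sketched as follows; NumPy's polyfit/polyval stand in for the polynomial solve, and the function and variable names are illustrative:

```python
import numpy as np

def moving_cubic_smooth(x, z, n_p=15):
    """Smooth z(x) with a sliding cubic fit over windows of 2*n_p + 1 points."""
    N = len(x)
    z_s = np.empty(N)
    # interior points: fit a cubic to the window centred on each point
    for i in range(n_p, N - n_p):
        w = slice(i - n_p, i + n_p + 1)
        coeff = np.polyfit(x[w], z[w], 3)
        z_s[i] = np.polyval(coeff, x[i])
    # boundary points: reuse the first / last interior window's polynomial
    head = np.polyfit(x[:2 * n_p + 1], z[:2 * n_p + 1], 3)
    tail = np.polyfit(x[-(2 * n_p + 1):], z[-(2 * n_p + 1):], 3)
    z_s[:n_p] = np.polyval(head, x[:n_p])
    z_s[N - n_p:] = np.polyval(tail, x[N - n_p:])
    return z_s

# sanity demo: an exact cubic must be reproduced by the fit
x = np.linspace(0.0, 1.0, 50)
z = 1.0 + 2.0 * x - x ** 3
z_smooth = moving_cubic_smooth(x, z, n_p=5)
```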
3) The maximum abdominal width point A is obtained, as shown in fig. 10:
the minimum-z point found in the fitted image is the maximum abdominal width point A.
As shown in fig. 11, the process of acquiring the shoulder end point D of the cattle-body depth information is as follows:

The shoulder end point D is determined as the point in the B, C region with the maximum distance to the B-C connecting line:

1) Identification of point B: point B is the turning point in the A-D area where the slope first rises and then falls. On the basis of this characteristic, taking A(x_A, z_A) as the starting point, count the number of points N_AL in the depth image along the positive x direction, then obtain in turn the angle θ_i between the line connecting each point P_i(x_pi, z_pi) (i = 1, 2, ..., N_AL) with point A and the x axis, namely:

θ_i = arctan( |z_pi - z_A| / (x_pi - x_A) )

To improve the fault tolerance of the identification of B, if from the k-th point (1 ≤ k ≤ N_AL) five consecutive points all satisfy θ_k < 0.5 × (θ_(k-1) + θ_(k-2)), the point corresponding to the maximum angle θ among these (k + 4) points is taken as point B;

2) Identification of point C: since the slope near point C also first rises and then falls, the identification of C is similar to that of B, except that the number of points N_BL forward along the x axis with point B as the starting point must first be obtained; then the envelope of the point cloud is obtained and a starting point E(x_E, z_E), analogous to A, is determined. Starting from point E, the angle θ_j between the line connecting each point P_j(x_pj, z_pj) (j = 1, 2, ..., N_BL) with point E and the x axis is computed in turn, namely:

θ_j = arctan( |z_pj - z_E| / (x_pj - x_E) )

After the angles θ_j are obtained, the judgment of point C proceeds in the same way as for point B;

3) After points B and C are identified, the distance from each point in the B, C region to the B-C connecting line is calculated; the point corresponding to the maximum distance is point D.
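Step 3), together with the angle computation used to locate B and C, can be sketched as below. The angle formula is a reconstruction of the patent's unreproduced equation (angle of the line A→P_i against the x axis), and the toy region data are invented:

```python
import numpy as np

def angles_from(start, pts):
    """theta_i: angle (degrees) between the line start -> P_i and the x axis."""
    dx = pts[:, 0] - start[0]
    dz = pts[:, 1] - start[1]
    return np.degrees(np.arctan2(np.abs(dz), dx))

def farthest_from_chord(pts, B, C):
    """Point D: the region point with the greatest distance to the B-C line."""
    (bx, bz), (cx, cz) = B, C
    dx, dz = cx - bx, cz - bz
    # perpendicular distance of each (x, z) point to the line through B and C
    dist = np.abs(dx * (pts[:, 1] - bz) - dz * (pts[:, 0] - bx)) / np.hypot(dx, dz)
    i = dist.argmax()
    return pts[i], float(dist[i])

# toy (x, z) profile between the identified turning points B and C
region = np.array([[1.0, 0.5], [2.0, 1.2], [3.0, 0.4]])
D, d_max = farthest_from_chord(region, B=(0.0, 0.0), C=(4.0, 0.0))
```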
As shown in fig. 12, the process of acquiring the ischial node F of the cattle-body depth information is as follows:
1) Determining that the ischial node F is positioned near the minimum point of the x value;
2) The k neighbouring points of the ischial node F are obtained;
3) And taking the maximum z value point of the k adjacent points as an ischial junction node F, wherein the maximum z value point is the point which is farthest from the camera in the k adjacent points.
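A minimal sketch of the three steps above, with an invented (x, z) point set and an arbitrarily chosen k:

```python
import numpy as np

def ischial_node(pts, k=4):
    """F: among the k nearest neighbours of the min-x point, the max-z point."""
    seed = pts[pts[:, 0].argmin()]            # F lies near the minimum-x point
    d = np.linalg.norm(pts - seed, axis=1)    # distances to the seed point
    neigh = pts[np.argsort(d)[:k]]            # k nearest neighbours (seed included)
    return neigh[neigh[:, 1].argmax()]        # farthest-from-camera neighbour

# toy rear-of-body points, columns (x, z)
rear = np.array([[0.0, 2.0], [0.1, 2.3], [0.2, 2.1], [0.15, 2.25],
                 [3.0, 1.0], [3.1, 1.1]])
F = ischial_node(rear, k=4)
```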
Step three, acquiring other information based on the beef cattle body-size depth information;
the other information acquired based on the beef cattle body-size depth information is: the highest point G of the withers and the upper and lower body-depth end points H_up and H_down;
supplementary explanation:
As shown in fig. 13, the highest point G of the withers is height information rather than depth information, but it can be derived from the depth information: first, using the known depth-information points D and B, compute the x coordinate of the midpoint of B and D; with this x coordinate, expand several slice point clouds to its left and right along the x axis, compute the highest point of each slice point cloud, and then take the highest of these per-slice maxima as the withers highest point G;

as shown in fig. 14, the upper and lower body-depth end points H_up and H_down are likewise height information of the cattle body rather than depth information, but they can also be derived from the depth information: using the known maximum abdominal width point A of the depth information as reference point (same x coordinate as A), expand 2 slice point clouds along the x axis and obtain the highest and lowest point of each slice point cloud; then take the highest of all per-slice maxima as the upper body-depth end point H_up and the lowest of all per-slice minima as the lower body-depth end point H_down.
The specific process for acquiring the withers highest point G is:
1) Determine that the withers highest point corresponds to the midpoint of the forelimb points B and D;
2) Calculate the x coordinate of the midpoint of B and D, and expand 2 slice point clouds left and right along the x axis with this x coordinate as reference point, obtaining the highest point of each slice point cloud;
3) The highest of the per-slice highest points then represents the withers highest point G.
The specific process for obtaining the key body-size measurement points H_up and H_down (the upper and lower body-depth end points) of the beef cattle is as follows:
1) Determining upper and lower end points of the body depth corresponding to the maximum point A of the abdomen;
2) Expanding 2 point clouds of slices left and right along an x axis by taking the x coordinate of the point A as a datum point;
3) Solving the highest point and the lowest point of each slice point cloud;
4) Then the highest of all the highest points is taken as the upper body-depth end point and the lowest of all the lowest points as the lower body-depth end point, giving H_up and H_down.
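The slice-and-take-extremes procedure shared by G (maxima only) and the body-depth end points (maxima and minima) can be sketched as below; the slice width, slice count and all coordinates are invented:

```python
import numpy as np

def slice_extremes(pts, x0, half_width=0.01, n_slices=1):
    """Per-slice y extremes in a band of slices around a reference x coordinate."""
    highs, lows = [], []
    for i in range(-n_slices, n_slices + 1):
        lo = x0 + (2 * i - 1) * half_width    # slice i covers [lo, hi) in x
        hi = x0 + (2 * i + 1) * half_width
        sl = pts[(pts[:, 0] >= lo) & (pts[:, 0] < hi)]
        if len(sl):
            highs.append(sl[:, 1].max())
            lows.append(sl[:, 1].min())
    return max(highs), min(lows)   # H_up / H_down heights; G uses only the max

# toy (x, y) points around the reference column x0 = 0
flank = np.array([[0.0, 1.0], [0.01, 1.4], [-0.01, 0.3],
                  [0.02, 1.2], [0.05, 9.9]])       # last point lies outside the band
y_up, y_down = slice_extremes(flank, x0=0.0)
```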
Step four, acquiring the body size parameters of the beef cattle, wherein the specific process is as follows:
the beef cattle body-size parameters are: body slant length, body straight length, shoulder width, abdominal width, body height and body depth;
(1) Body slant length: the distance from the shoulder end point D of the beef cattle to the ischial node F on the same side;
The body slant length formula is shown in fig. 5. Body slant length L_X: the Euclidean distance from the beef cattle shoulder end point D(x_D, y_D, z_D) to the ischial node F(x_F, y_F, z_F), i.e. L_X = sqrt((x_D - x_F)^2 + (y_D - y_F)^2 + (z_D - z_F)^2).
(2) Body straight length: the horizontal distance from the beef cattle shoulder end point D to the vertical line through the ischial node F;
The body straight length formula is shown in fig. 5. Body straight length L: the horizontal distance from D(x_D, y_D, z_D) to the vertical line through F(x_F, y_F, z_F), i.e. L = |x_D - x_F|.
(3) Shoulder width: the maximum width between the shoulder end points D on the two sides of the beef cattle;
The shoulder width formula is shown in fig. 5. Shoulder width W_S: 2 times the depth distance from D(x_D, y_D, z_D) to F(x_F, y_F, z_F), i.e. W_S = 2 |z_D - z_F|.
(4) Abdominal width: the maximum width at point A on the abdomen of the beef cattle;
The abdominal width formula is shown in fig. 5. Abdominal width W_A: 2 times the depth distance from the maximum abdominal point A(x_A, y_A, z_A) to the ischial node F(x_F, y_F, z_F), i.e. W_A = 2 |z_A - z_F|.
(5) Body height: the vertical distance from the withers highest point G to the ground;
The body height formula is shown in fig. 5. Body height H: the distance from the withers highest point G(x_G, y_G, z_G) to the ground plane, where the ground plane is the plane y = 0, i.e. H = y_G.
(6) Body depth: the vertical distance from the lumbar vertebra to the bottom of the abdomen at point A, at the maximum abdominal girth.
The body depth formula is shown in fig. 5. Body depth D_u,d: the vertical height difference between the upper and lower body-depth end points H_up(x_Hup, y_Hup, z_Hup) and H_down(x_Hdown, y_Hdown, z_Hdown), i.e. D_u,d = y_Hup - y_Hdown.
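Collecting the six formulas, the body-size parameters can be computed from the six key points as sketched below; the absolute values and the y = 0 ground plane follow the definitions above, while the function name, dictionary keys and sample coordinates are invented:

```python
import numpy as np

def body_measures(A, D, F, G, H_up, H_down):
    """Six body-size parameters from the six key points, each given as (x, y, z)."""
    A, D, F, G, H_up, H_down = (np.asarray(p, float)
                                for p in (A, D, F, G, H_up, H_down))
    return {
        "slant_length":    float(np.linalg.norm(D - F)),  # Euclidean D-F distance
        "straight_length": abs(D[0] - F[0]),              # |x_D - x_F|
        "shoulder_width":  2 * abs(D[2] - F[2]),          # 2x depth offset of D vs F
        "belly_width":     2 * abs(A[2] - F[2]),          # 2x depth offset of A vs F
        "body_height":     G[1],                          # ground plane is y = 0
        "body_depth":      H_up[1] - H_down[1],           # vertical span at max girth
    }

m = body_measures(A=(1.0, 0.5, 1.8), D=(0.0, 1.0, 2.0), F=(3.0, 1.0, 2.5),
                  G=(0.5, 1.6, 2.2), H_up=(1.0, 1.3, 2.0), H_down=(1.0, 0.2, 2.0))
```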
And step five, carrying out precision comparison analysis on the beef body size data based on image processing.
Embodiment I: method for analyzing the errors of the beef cattle weight estimation results
To verify the accuracy of the multiple linear regression model (MLR), the partial least squares model (PLS), the BP neural network model and the RBF neural network model in predicting weight, the predicted values of beef cattle weight were compared with the measured values, and the prediction accuracy of the models was analyzed using the following indices.
When model training is complete, the quality of the model needs to be judged through evaluation indices; if the model does not meet the requirements of an index, it can be optimized against that index. Different types of problems often need different evaluation indices. The following three criteria are commonly used to evaluate regression models: the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE) and the coefficient of determination R² (R-Square).
(1) Mean absolute error MAE
The MAE (mean absolute error) is the average of the absolute deviations of the predictions from the observed values; the absolute value prevents errors of opposite sign from cancelling. Its advantage is that it reflects the actual error between predicted and true values intuitively and accurately; its drawback is that the absolute-value function is not easily differentiated during the optimization of an algorithm. It is calculated as follows, where t'_i denotes the predicted value and t_i the true value:

MAE = (1/n) Σ_{i=1..n} |t'_i - t_i|
(2) Root mean square error (RMSE)
The RMSE is the standard deviation of the residuals (prediction errors). A residual measures how far a data point lies from the regression line; the RMSE measures the spread of these residuals, i.e. how concentrated the data are around the line of best fit. It is widely used in climate studies, forecasting and regression analysis to validate experimental results. The RMSE is calculated as:

RMSE = sqrt( (1/n) · Σ_{i=1..n} (t'_i − t_i)² )
(3) Coefficient of determination R²
R² is an important statistic reflecting the goodness of fit of a model, defined as the ratio of the regression sum of squares to the total sum of squares. R² is dimensionless and lies between 0 and 1; its magnitude reflects the relative contribution of the regression, i.e. the percentage of the total variation in the dependent variable Y that the regression relationship can explain. The closer R² is to 1, the better the fitted regression equation:

R² = 1 − Σ_{i=1..n} (t_i − t'_i)² / Σ_{i=1..n} (t_i − t̄)²

where t̄ is the mean of the measured values.
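The three indices above follow directly from their formulas. A minimal sketch, with hypothetical measured and predicted weight values used purely for illustration (none of these numbers are from the patent):

```python
import math

def mae(t_true, t_pred):
    # Mean absolute error: mean of |t'_i - t_i|
    return sum(abs(p - t) for t, p in zip(t_true, t_pred)) / len(t_true)

def rmse(t_true, t_pred):
    # Root mean square error: square root of the mean squared residual
    return math.sqrt(sum((p - t) ** 2 for t, p in zip(t_true, t_pred)) / len(t_true))

def r_squared(t_true, t_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    t_bar = sum(t_true) / len(t_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(t_true, t_pred))
    ss_tot = sum((t - t_bar) ** 2 for t in t_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted beef cattle weights (kg)
measured = [512.0, 498.0, 530.0, 475.0]
predicted = [505.0, 501.0, 524.0, 480.0]
print(mae(measured, predicted))                  # -> 5.25
print(round(rmse(measured, predicted), 3))       # -> 5.454
print(round(r_squared(measured, predicted), 3))  # -> 0.926
```

An R² close to 1 on held-out animals, together with low MAE and RMSE, would indicate a usable weight estimation model.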
It should be emphasized that the above-described embodiments are merely illustrative of the present invention and are not limiting. Modifications and variations of these embodiments that do not involve inventive effort may occur to those skilled in the art upon reading the specification, and such modifications remain within the scope of the appended claims.

Claims (8)

1. A beef cattle three-dimensional depth image acquisition system based on a laser array and a monocular camera, comprising a monocular global-exposure CMOS sensor image acquisition unit, a laser array structured-light emission unit, an image processing algorithm unit and a control unit, wherein the monocular global-exposure CMOS sensor image acquisition unit has an optimized photosensitive unit and captures moving objects at high speed; the laser array structured-light emission unit projects a laser array in the near-infrared band; the laser array pattern, deformed after reflection from the target object, is received by the monocular global-exposure CMOS sensor image acquisition unit; and finally the image processing algorithm unit calculates the three-dimensional depth position of the target object from the captured picture;
the single-eye global exposure CMOS sensor is characterized in that an image acquisition unit of the single-eye global exposure CMOS sensor is provided with a single-eye global camera capable of shooting two-dimensional images, and the single-eye global camera capable of shooting two-dimensional images, a laser array structure light emission unit and a rear-end image processing algorithm unit form a 3D imaging system;
the image processing algorithm module acquires the beef body ruler depth information and other beef body ruler depth information based on the beef body ruler depth information by calculating the distance between two points on the laser array pattern and the fixed included angle between the two pointsAcquiring information and beef cattle body size parameters; the calculation module for acquiring the body ruler depth information of the beef cattle is as follows: the abdomen width maximum point A calculation module, the shoulder end point D calculation module and the ischial node F calculation module; the calculation module for acquiring other information based on the body size depth information of the beef cattle is as follows: astragalus membranaceus highest point G calculation module, body depth upper and lower end points H up And H down The device comprises a calculation module and a beef cattle body symmetry plane calculation module; the beef body ruler parameters are obtained by calculating the body slant length, the body straight length, the shoulder width, the abdomen width, the body height and the body depth according to the 6 points;
the laser array structure light emission unit comprises a laser driving circuit for driving a laser array structure light source, a laser array structure light source for generating a near-infrared band laser array, and a laser array emission optical module for projecting the laser array onto a target object
the monocular global-exposure CMOS sensor image acquisition unit comprises a laser array pattern receiving optical module for receiving the laser array pattern, and a global-exposure CMOS sensor; the global-exposure CMOS sensor comprises a 1288 × 1032V photosensitive unit, a floating-gate source follower, a filter circuit and a pixel micro-optical structure, wherein the 1288 × 1032V photosensitive unit improves the response efficiency to the detection laser and transmits the resulting electrical signal to the floating-gate source follower;
the 1288 × 1032V photosensitive unit is made of silicon and is structurally provided with a floating gate MOS capacitor, and the structure directly samples photo-generated electronic signals through the potential coupling effect; the wafer thickness of the globally exposed CMOS sensor chip is made to be 12-15 mu m, so that the absorption efficiency of the 1288X 1032V photosensitive unit to photons with the wavelength of 808nm is improved to 65.32% from the current 6.73%;
the floating gate source follower is a following amplifying circuit of 1288 × 1032V photosensitive units and comprises a floating gateIn the manufacturing process of the floating gate source follower, the dosage of boron atoms injected in the first step in an isolation mode is 5.5e12cm -3 The implantation energy is 121KeV; the dose of the second step for preventing the through implantation of boron atoms is 1.2e13cm -3 The implantation energy is 38KeV; thirdly, adjusting the dosage of the phosphorus atoms implanted into the threshold voltage to be 0cm -3 The implantation energy is 0KeV, so that the channel length of the floating gate source follower is 450nm; the signal storage node is designed to be 1.8fF in capacitance, so that the voltage of the floating gate source electrode follower is 0.7V;
the spherical surface diameter of the pixel micro-optical structure is 4.0 mu m, the thickness of the pixel micro-optical structure is 1.0 mu m, and a layer of narrow-band-pass filter film with the passing wavelength of 808 +/-2.5 nm is plated on the micro-lens on the surface of the pixel, so that light of other wave bands except the wave band of the detection laser light source is filtered, and the signal-to-noise ratio of incident light is improved; the micro-optical structure adopts a pixel structure of a conical light channel to work in combination with a micro-lens, and the conical light channel structure is of an inverted trapezoidal structure;
the high-frequency cut-off frequency of the filter circuit is 200Hz, and the low-frequency cut-off frequency is 40Hz, so that the image sensor is ensured to only receive a detection laser image with the frequency of 120Hz, and a light reflection light image which is the same as the laser wavelength but different in frequency in other environment light is filtered, and the signal-to-noise ratio of the image is improved.
2. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera as set forth in claim 1, wherein the parameters of the system are as follows: a VCSELs area-array invisible near-infrared laser, model CL-VCLB71AA, wavelength 808 nm, with 3000 lattice dots is adopted; the beef cattle body image acquisition distance is ≥ 5 m; the acquisition height is ≥ 2 times the beef cattle body height; the camera field angle is ≥ 25°, covering the whole model cattle; the laser emission angle is ≥ 20°; the camera shoots perpendicular to the side of the beef cattle; the working temperature is −40 °C to 50 °C; the monocular global camera works in cooperation with the laser array structured-light emission unit, has a high response rate to near-infrared light, and adopts a fixed-focus lens, making the system light and compact and keeping the optical axis stable during use.
3. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera as set forth in claim 1, wherein the image processing unit comprises a laser-dot-matrix-based beef cattle body size data acquisition subunit and a laser-dot-matrix-based beef cattle weight estimation subunit; the body size data acquisition subunit comprises: a system initialization and effective beef cattle contour extraction module, a beef cattle body size depth information acquisition module, a module for acquiring further information based on the body size depth information, a beef cattle body size parameter acquisition module, and an image-processing-based beef cattle body size data precision comparison analysis module; the weight estimation subunit obtains the predicted weight of the beef cattle by constructing a relation model between animal body size parameters and weight, and is provided with an image-processing-based beef cattle weight estimation scheme research module, a body-size-data-based weight estimation model building module, an image-processing-based beef cattle weight estimation precision comparison analysis module, and a real-time beef cattle feed adjustment precision feeding module.
4. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera according to claim 1, wherein the maximum abdomen width point A calculation module for acquiring the beef cattle body size depth information comprises a submodule dividing the region to which the minimum z value belongs, a two-point distance calculation submodule, a minimum-z-value-point curve smoothing submodule, and a maximum abdomen width point A acquisition submodule; within the region of the minimum z value, the smaller the distance between two points, the closer the corresponding point of the target object is to the camera, and the larger the distance, the farther it is; the point of the target object closest to the camera is the minimum z value point, and the minimum z value point is the maximum abdomen width point A.
5. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera according to claim 1, wherein the shoulder end point D calculation module for acquiring the beef cattle body size depth information comprises a shoulder end point D region division submodule, a B and C special point identification submodule, and a maximum distance point calculation submodule; the special points B and C lie on either side of the shoulder end point D, each at the junction where the slope along the body size depth direction turns from rising to falling; the D point calculation submodule computes the distance from each point to the line connecting the special points B and C and finds the point with the maximum distance, which is also the point with the minimum z value in its region; this point is the shoulder end point D.
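Claim 5 locates the shoulder end point D as the contour point farthest from the line through the special points B and C. A minimal 2-D sketch of that maximum-distance search, using the standard perpendicular-distance formula and hypothetical coordinates (none of these numbers are from the patent):

```python
def point_line_distance(p, b, c):
    # Perpendicular distance from point p to the line through b and c,
    # via the cross-product area formula: |BC x BP| / |BC|
    (px, py), (bx, by), (cx, cy) = p, b, c
    num = abs((cx - bx) * (by - py) - (bx - px) * (cy - by))
    den = ((cx - bx) ** 2 + (cy - by) ** 2) ** 0.5
    return num / den

def farthest_from_line(points, b, c):
    # Candidate shoulder end point: the point with maximum distance to line BC
    return max(points, key=lambda p: point_line_distance(p, b, c))

b, c = (0.0, 0.0), (10.0, 0.0)
contour = [(2.0, 0.5), (4.0, 1.8), (6.0, 1.1), (8.0, 0.3)]
print(farthest_from_line(contour, b, c))  # -> (4.0, 1.8)
```

In the real system the candidate points would come from the laser-dot depth map of the shoulder region rather than a hand-written list.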
6. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera according to claim 1, wherein the ischial node F calculation module for acquiring the beef cattle body size depth information comprises an ischial node F region division submodule, a k-nearest-neighbour z value calculation submodule, and an ischial node F calculation submodule; the ischial node F calculation submodule determines the ischial node F from the maximum of the z values of the k nearest neighbours; the point with the maximum z value is the ischial node F, which lies essentially at the midpoint of the cattle body and can serve as a central point for calculating the abdomen width and shoulder width of the beef cattle.
7. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera according to claim 1, wherein the withers highest point G calculation module, which acquires further information based on the beef cattle body size depth information, comprises a withers highest point G region division submodule, a midpoint x coordinate calculation submodule, a submodule expanding several slice point clouds along the midpoint x coordinate, a withers highest point G calculation submodule, and a submodule searching for the highest point on the y axis, the obtained highest point being the withers point G;
the body depth upper and lower end point H_up and H_down calculation module, which acquires further information based on the beef cattle body size depth information, comprises a body depth region division submodule, a reference point determination submodule, a submodule expanding several slice point clouds to the left and right of the reference point, a submodule calculating the highest point of each slice point cloud, a submodule searching the highest point of each slice on the y axis, and a submodule searching the lowest point of each slice on the y axis; the obtained y-axis highest point and y-axis lowest point are the body depth upper and lower end points H_up and H_down.
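Claim 7 extracts the body depth end points H_up and H_down as the highest and lowest y-axis points of a slice point cloud, and the description defines body depth as the difference of their vertical heights. A minimal sketch under the assumption that a slice is given as (x, y, z) tuples (the sample coordinates are hypothetical):

```python
def body_depth_endpoints(slice_points):
    # slice_points: (x, y, z) points from one vertical slice of the cow body;
    # H_up / H_down are the highest and lowest points along the y axis
    h_up = max(slice_points, key=lambda p: p[1])
    h_down = min(slice_points, key=lambda p: p[1])
    return h_up, h_down

def body_depth(slice_points):
    # Body depth D_u,d: difference of the end points' vertical heights
    h_up, h_down = body_depth_endpoints(slice_points)
    return h_up[1] - h_down[1]

# Hypothetical slice of laser-dot returns (metres)
slice_cloud = [(0.1, 1.42, 5.0), (0.1, 0.78, 5.1), (0.1, 1.10, 5.05)]
print(round(body_depth(slice_cloud), 2))  # -> 0.64
```

Running this over each slice around the reference point and keeping the extreme values mirrors the submodule structure described in the claim.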
8. The beef cattle three-dimensional depth image acquisition system based on the laser array and the monocular camera as set forth in claim 3, wherein the research on the image-processing-based beef cattle weight estimation scheme comprises a multiple linear regression model, a partial least squares method and an RBF neural network model.
CN202210421003.7A 2022-04-21 2022-04-21 Beef cattle three-dimensional depth image acquisition system based on laser array and monocular camera Active CN114966733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210421003.7A CN114966733B (en) 2022-04-21 2022-04-21 Beef cattle three-dimensional depth image acquisition system based on laser array and monocular camera


Publications (2)

Publication Number Publication Date
CN114966733A CN114966733A (en) 2022-08-30
CN114966733B true CN114966733B (en) 2023-04-18

Family

ID=82978408


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117268474A (en) * 2023-11-21 2023-12-22 江西中汇云链供应链管理有限公司 Device and method for estimating volume, number and weight of objects in scene

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US6377353B1 (en) * 2000-03-07 2002-04-23 Pheno Imaging, Inc. Three-dimensional measuring system for animals using structured light
KR101293814B1 (en) * 2011-12-12 2013-08-06 성균관대학교산학협력단 Systems of estimating weight of chicken carcass and methods of estimating weight of chicken carcass
JP6083638B2 (en) * 2012-08-24 2017-02-22 国立大学法人 宮崎大学 Weight estimation apparatus for animal body and weight estimation method
CN105120257B (en) * 2015-08-18 2017-12-15 宁波盈芯信息科技有限公司 A kind of vertical depth sensing device based on structure light coding
CN110557527B (en) * 2018-06-04 2021-03-23 杭州海康威视数字技术股份有限公司 Camera and snapshot image fusion method
CN109978937A (en) * 2019-04-30 2019-07-05 内蒙古科技大学 A kind of ox body measurement system detected based on deep learning and characteristic portion
CN112001958B (en) * 2020-10-28 2021-02-02 浙江浙能技术研究院有限公司 Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
CN112462389A (en) * 2020-11-11 2021-03-09 杭州蓝芯科技有限公司 Mobile robot obstacle detection system, method and device and electronic equipment
CN112907546B (en) * 2021-02-25 2024-04-05 北京农业信息技术研究中心 Non-contact measuring device and method for beef scale
CN113470106B (en) * 2021-07-14 2022-12-02 河南科技大学 Non-contact cow body size information acquisition method
CN113808156B (en) * 2021-09-18 2023-04-18 内蒙古大学 Outdoor cattle body ruler detection method and device
CN113920138A (en) * 2021-10-18 2022-01-11 华北水利水电大学 Cow body size detection device based on RGB-D camera and detection method thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 101300 2-5, floors 1-2, building 3, yard 9, Yuxi Road, Shunyi District, Beijing

Patentee after: Beijing Shunxin Futong Big Data Group Co.,Ltd.

Country or region after: China

Address before: 101300 2-5, floors 1-2, building 3, yard 9, Yuxi Road, Shunyi District, Beijing

Patentee before: BEIJING FATOAN TECHNOLOGY GROUP Co.,Ltd.

Country or region before: China
