CN118154480A - Cultivation-combined farm sewage purifying treatment method and system - Google Patents

Cultivation-combined farm sewage purifying treatment method and system

Info

Publication number
CN118154480A
CN118154480A (application CN202410535393.XA)
Authority
CN
China
Prior art keywords
image
value
difference
target
taking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410535393.XA
Other languages
Chinese (zh)
Other versions
CN118154480B (en)
Inventor
贾锁斌
李桦
张青松
杨红红
刘蒙蒙
高丽
李树强
董琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Zhengneng Agriculture And Animal Husbandry Technology Co ltd
Original Assignee
Shaanxi Zhengneng Agriculture And Animal Husbandry Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Zhengneng Agriculture And Animal Husbandry Technology Co ltd filed Critical Shaanxi Zhengneng Agriculture And Animal Husbandry Technology Co ltd
Priority to CN202410535393.XA priority Critical patent/CN118154480B/en
Publication of CN118154480A publication Critical patent/CN118154480A/en
Application granted granted Critical
Publication of CN118154480B publication Critical patent/CN118154480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image enhancement, in particular to a cultivation-combined farm sewage purifying treatment method and system. According to the invention, a plurality of time-sequential image frames of the dirt sedimentation process are acquired and converted to the HSV color space. A frame difference image is first obtained from the gray value differences of pixel points, and the area images corresponding to the dirt are identified. The positional relation and the hue and brightness changes between the area images of different frame difference images are calculated to understand the movement mode of the dirt and obtain the movement direction of each area image. A smear degree value is calculated by analyzing the hue difference and the saturation between the pixel points in an area image and the pixels lying opposite to its movement direction, and an adaptive Gaussian parameter is determined. The image is processed with the adaptive Gaussian parameter to obtain a clearer and more accurate enhanced image. Finally, dirt purification monitoring is carried out on the enhanced image, so that an accurate purification result can be obtained.

Description

Cultivation-combined farm sewage purifying treatment method and system
Technical Field
The invention relates to the technical field of image enhancement, in particular to a cultivation-combined method and a cultivation-combined system for purifying and treating sewage in a farm.
Background
The planting and breeding combination refers to the organic combination of crop planting and livestock breeding: straw produced by crops can be used as feed for livestock breeding, and manure produced by breeding can be used as fertilizer for crop growth, so that resource optimization and saving are realized through a green and environment-friendly production mode. In the treatment of the farm sewage, the dirt needs to be separated by solid-liquid separation: the sedimentation method is used for primary treatment of the dirt, and a flocculant or other solid-liquid separation methods are then used for further treatment. If the follow-up treatment after sedimentation is not carried out in time, insufficient fertilizer may result; therefore, the sedimentation effect needs to be analyzed by monitoring the sedimentation tank.
In the process of monitoring the sedimentation tank, the suspended movement of the dirt causes motion blur and similar defects in the acquired images, so the images need to be enhanced. In the prior art, the images are enhanced with fixed Gaussian parameters; however, because the composition of the dirt in the sedimentation tank is complex and the sedimentation progress differs at different moments, the state in the sedimentation tank keeps changing. Using fixed Gaussian parameters for image enhancement therefore yields a poor enhancement effect and affects the accuracy of the dirt purification monitoring result.
Disclosure of Invention
In order to solve the technical problem that the accuracy of the sewage purification detection result is insufficient because image enhancement with fixed Gaussian parameters does not take into account that the sedimentation progress and the state of the sedimentation tank differ at different moments, the invention aims to provide a cultivation-combined farm sewage purifying treatment method and system, and the adopted technical scheme is as follows:
in time sequence, acquiring a plurality of image frames in the dirt sedimentation process, wherein the image frames comprise current time image frames and a plurality of historical image frames, and converting all the image frames into an HSV color space;
Obtaining a frame difference image according to the gray value difference of pixel points at the same position among all the image frames; carrying out connected domain analysis on each frame difference image to obtain an area image corresponding to dirt; determining a target image and a contrast image according to the time sequence relation of the frame difference image, and taking a region image in the target image as a target region; obtaining an angle corresponding to the movement direction of the target area according to the position relation between the target image and the area image in the contrast image, the change condition of the hue value and the change condition of the brightness value;
In the current time image frame, analyzing the difference of hue values between the pixel points in the corresponding area of the target area and other pixel points in the opposite direction of the movement direction of the target area and the saturation of the pixel points to obtain a smear degree value of the current time image frame; obtaining self-adaptive Gaussian parameters according to the discrete condition of the motion directions of all target areas, the smear degree value of the image frames at the current moment and the fluctuation condition of the saturation among pixels in the opposite direction of the motion directions of all target areas;
Performing enhancement processing on the current moment image frame according to the self-adaptive Gaussian parameters to obtain an enhanced image; and performing dirt purification monitoring according to the enhanced image to obtain a dirt purification result at the current moment.
Further, the determining the target image and the contrast image according to the time sequence relation of the frame difference image includes:
and arranging all the frame difference images according to a time sequence to obtain a sequencing sequence, taking the last frame difference image in the sequencing sequence as a target image, and taking other frame difference images except the last frame difference image in the sequencing sequence as comparison images.
Further, the method for acquiring the angle corresponding to the movement direction comprises the following steps:
acquiring a hue value average value and a brightness value average value of all pixel points in each regional image in the target image and each contrast image;
optionally selecting a target area as a region to be detected, and selecting an area image closest to the centroid distance between the regions to be detected as a matching area in each contrast image;
In each contrast image, taking the centroid of a matching area corresponding to the area to be detected as a starting point, taking rays at the same position of the centroid of the area to be detected, and taking an included angle between the rays and the horizontal direction in the anticlockwise direction as a movement angle between each matching area and the area to be detected;
Taking the difference of the hue value mean value of the region to be detected and each matching region as a hue difference value; taking the difference of the brightness value mean value of the region to be detected and each matching region as a brightness difference value; obtaining a difference factor according to the hue difference value and the brightness difference value, wherein the hue difference value and the brightness difference value are positively correlated with the difference factor, and the difference factor takes the value of a normalized numerical value; taking the value of the difference factor subjected to negative correlation mapping as a direction weight;
Taking the product of the direction weight corresponding to each matching region and the motion angle corresponding to each matching region as a direction angle factor corresponding to each matching region; and taking the average value of the direction angle factors corresponding to all the matching areas as the angle corresponding to the movement direction of the area to be detected.
Further, the method for obtaining the smear degree value includes:
Taking the area of the same position of each target area in the image frame at the current moment as an analysis area;
In the current moment image frame, taking each pixel point in the analysis area as a target point; for each target point, sequentially traversing pixel points in the opposite direction of the motion direction corresponding to the analysis area to which the target point belongs by taking each target point as a starting point, and taking the traversed pixel points each time as to-be-detected points corresponding to the target point;
Under each traversal, obtaining a smear factor corresponding to each target point under the current traversal number according to the difference of hue values between each target point and the corresponding point to be detected and the saturation value of the point to be detected; taking the average value of the smear factors of all target points under the current traversal times as the smear characteristic value of the current time image frame;
When the smear characteristic value of the current time image frame at the next traversal is smaller than the smear characteristic value at the current traversal, the current traversal count is used as the reference count, the pixel points traversed so far for each target point are used as the smear pixel points, and the smear characteristic value at this last traversal is used as the smear degree value of the current time image frame.
Further, the method for acquiring the smear factor comprises the following steps:
for any target point, carrying out negative correlation mapping on the difference of hue values between the target point and the point to be measured, and taking the normalized value as an adjustment weight; and taking the product of the value obtained after carrying out negative correlation mapping on the saturation value of the to-be-measured point and the corresponding adjustment weight as a smear factor corresponding to the target point under the current traversal times.
Further, the method for acquiring the adaptive Gaussian parameters comprises the following steps:
in the current time image frame, normalizing the information entropy of the angles corresponding to the motion directions of all analysis areas and mapping it by negative correlation, and taking the product of the mapped value and the smear degree value of the current time image frame as an adjustment coefficient;
in the current time image frame, taking the standard deviation of the saturation of the smear pixel points corresponding to all target points in each analysis area as a fuzzy factor, and taking the average value of the fuzzy factors of all the analysis areas as a fuzzy characteristic value of the current time image frame;
and taking the product of the fuzzy characteristic value and the adjustment coefficient as the adaptive Gaussian parameter.
Further, the method for acquiring the enhanced image comprises the following steps:
Acquiring the initial fuzzy kernel size according to the reference times;
filling the elements of the initial blur kernel so that their distribution accords with the Gaussian distribution formed by the adaptive Gaussian parameter and their sum equals 1, thereby obtaining an updated blur kernel;
and carrying out enhancement processing on the image of the current moment image frame in the RGB space by using a non-blind deconvolution method according to the updated fuzzy core to obtain an enhanced image.
Further, the performing the dirt purifying and monitoring according to the enhanced image to obtain a dirt purifying result at the current moment includes:
And taking the enhanced image as input of a pre-trained neural network, and outputting a dirt purifying result at the current moment.
Further, the initial blur kernel size is set to:
an odd number that is greater than the reference count and has the smallest difference from it is taken as the side length of the initial blur kernel.
The invention also provides a breeding combined farm sewage purifying treatment system, which comprises:
A memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any one of the methods when the computer program is executed.
The invention has the following beneficial effects:
The method first acquires a plurality of image frames of the dirt sedimentation process in time sequence, including the current time image frame and historical image frames; because the dirt types in the sedimentation tank are complex and colors are difficult to distinguish accurately in RGB space, all image frames are converted to the HSV color space. Since the sedimentation process is dynamic and the dirt solids move during sedimentation, frame difference images can be obtained and the area images corresponding to the dirt can be screened based on the gray value differences of pixel points at the same position in adjacent image frames; the target image and the contrast images are then determined according to the time sequence relation of the frame difference images, and the area images in the target image are taken as target areas. The positional relation between area images reflects the movement of the dirt, while the hue value and the brightness value indicate whether two area images correspond to the same dirt solid; therefore the positional relation and the changes in hue value and brightness value between each target area and the area images in the contrast images are calculated to obtain the movement direction of each target area, namely of the dirt, which gives a good understanding of the movement mode of the dirt in the sedimentation tank and provides a reference for the subsequent calculation of the smear degree. Smear is the image distortion caused by the motion of a moving object and directly affects the overall quality of an image, so in the current time image frame the pixel points in the area corresponding to each target area and the pixel points lying opposite to its movement direction are analyzed: the difference in hue value between the pixel points is compared and combined with the saturation of the pixel points to calculate the smear degree value of the current time image frame. Since the movement directions of different dirt are generally irregular, and in order to better fit the movement mode of the dirt in the current time image frame, the discreteness of the movement directions of the target areas, the smear degree value of the current time image frame and the fluctuation of the saturation of the pixel points opposite to the movement direction are analyzed to obtain the adaptive Gaussian parameter of the current time image frame. Finally, the current time image frame is enhanced according to the obtained adaptive Gaussian parameter to obtain a more accurate enhanced image, and dirt purification monitoring is carried out according to the enhanced image, so that a more accurate dirt purification result at the current moment can be obtained.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a cultivation-combined farm sewage purifying treatment method according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for acquiring an angle corresponding to a movement direction according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended aim, the specific implementation, structure, characteristics and effects of the cultivation-combined farm sewage purifying treatment method and system according to the invention are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a cultivation combined farm sewage purifying treatment method and a system specific scheme by combining the drawings.
Referring to FIG. 1, a method flow chart of a method for purifying and treating plant sewage by combining cultivation according to one embodiment of the invention is shown, the method comprises the following steps:
Step S1: in time sequence, a plurality of image frames in the dirt sedimentation process are acquired, the image frames comprise the current time image frame and a plurality of historical image frames, and all the image frames are converted into HSV color space.
As the sedimentation process is a dynamic process, the dirt solid moves and the positions change at different moments, a certain motion blur exists in the acquired image, and the acquired image needs to be enhanced, so that the accuracy of the dirt purifying monitoring result is improved.
An underwater camera is arranged in the sedimentation tank and used for monitoring the sedimentation progress in real time, and as the motion state cannot be visually displayed from one image, a plurality of image frames in the dirt sedimentation process are acquired in time sequence, wherein the image frames comprise the current moment image frame and a plurality of historical image frames. In addition, in view of the turbidity in the sedimentation tank, the color characteristics under the RGB space are less obvious, and because the different positions have different dirt concentrations and different influence degrees of refraction, scattering and the like of light rays, the motion condition of an object can be more easily tracked based on analysis of the HSV space, so that the color space conversion is carried out on all image frames, and the image frames under the HSV space are obtained.
It should be noted that, the device for acquiring the image frame may be adjusted according to the implementation scene, which is not limited herein, and the conversion of the image color space is a technical means known to those skilled in the art, and will not be described herein.
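As a minimal sketch of this step (assuming the frames have already been captured by the underwater camera and saved to disk; OpenCV is used here only as one common choice, and the file names are hypothetical), the image frames can be read in time order and converted to the HSV color space as follows:

```python
import cv2


def load_frames_as_hsv(frame_paths):
    """Read image frames in time order and convert each from BGR to HSV.

    frame_paths: list of file paths ordered by acquisition time; the last
    entry is treated as the current-time image frame, the rest as history.
    """
    hsv_frames = []
    for path in frame_paths:
        bgr = cv2.imread(path)                      # OpenCV loads images as BGR
        if bgr is None:
            raise FileNotFoundError(path)
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # H, S, V channels
        hsv_frames.append(hsv)
    return hsv_frames


# Example usage (hypothetical file names):
# hsv_frames = load_frames_as_hsv(["frame_t0.png", "frame_t1.png", "frame_t2.png"])
# current_frame_hsv, history_hsv = hsv_frames[-1], hsv_frames[:-1]
```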
Step S2: obtaining a frame difference image according to the gray value difference of pixel points at the same position among all the image frames; carrying out connected domain analysis on each frame difference image to obtain an area image corresponding to dirt; determining a target image and a contrast image according to the time sequence relation of the frame difference image, and taking a region image in the target image as a target region; and obtaining an angle corresponding to the movement direction of the target area according to the position relation between the target image and the regional image in the contrast image, the change condition of the hue value and the change condition of the brightness value.
By comparing the differences in gray values of the pixels at the same location in different image frames, areas where the gray values change, which tend to correspond to moving dirt, can be effectively highlighted. Therefore, the frame difference image can be obtained by analyzing the gray value differences of the pixel points in different image frames. The specific method comprises the following steps: taking the current time image frame and all historical image frames except the first one as target frames, and acquiring a frame difference image between each target frame and the temporally adjacent preceding image frame based on the frame difference method. If there are 5 frames in total, counting the current time image frame and the historical image frames, 4 frame difference images are obtained.
On a frame difference image, the gray value of pixel points belonging to stationary objects is 0, so the region where pixel points with a gray value of 0 are located is taken as the background region; pixel points with non-zero gray values, which arise from gray-level changes and in particular outline the moving objects, can be regarded as the foreground region.
For any one frame difference image, the frame difference image is subjected to connected domain analysis, and the obtained connected domain is used as a region image, and each region image can be regarded as a dirt solid.
It should be noted that, the frame difference method and the connected domain analysis are all technical means well known to those skilled in the art, and are not described herein.
In other embodiments of the present invention, when distinguishing the foreground area from the background area, a gray threshold may be set, where a pixel having a gray value greater than the gray threshold is regarded as a pixel in the foreground area, the gray threshold is set as a mean value of gray values of all pixels in the frame difference image, and the specific value may be adjusted according to the implementation scene, which is not limited herein.
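A minimal sketch of the frame differencing and connected-domain analysis described above, assuming single-channel (grayscale) versions of two temporally adjacent frames and using the mean gray value of the frame difference image as the foreground threshold of this embodiment, might look like this:

```python
import cv2
import numpy as np


def frame_difference_regions(prev_gray, curr_gray):
    """Frame-difference a target frame against its temporal predecessor and
    extract candidate dirt regions as connected components.

    prev_gray, curr_gray: single-channel uint8 images of the same size.
    Returns the frame difference image and a list of boolean masks,
    one per region image (connected component of the foreground).
    """
    diff = cv2.absdiff(curr_gray, prev_gray)            # frame difference image
    threshold = diff.mean()                             # mean gray value as threshold (per this embodiment)
    foreground = (diff > threshold).astype(np.uint8)    # above-threshold pixels = moving dirt
    num_labels, labels = cv2.connectedComponents(foreground)
    region_masks = [labels == lab for lab in range(1, num_labels)]  # label 0 is background
    return diff, region_masks
```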
Therefore, the corresponding area image of the dirt solid in each frame difference image can be obtained, and in the subsequent process, the area image can be analyzed, so that the smear condition of the dirt solid in the current time image frame is quantified, and the quality of the current time image frame is evaluated.
Before determining the smear condition of the dirt solid, the motion direction of the dirt solid should be determined first, and the position relationship of the region image in different frame difference images can represent the motion condition of the dirt solid, and the change of hue values and brightness values of the region image in different frame difference images can help to determine whether the region image is the region image corresponding to the same dirt solid.
Because each frame difference image has an area image, and the positions of the area images have slight differences, in order to more accurately represent dirt solids in the current time image frame, the target image and the contrast image need to be determined according to the time sequence relation of the frame difference images.
Preferably, in one embodiment of the present invention, the method for distinguishing the target image from the contrast image includes:
All the frame difference images are arranged in time order to obtain a sorted sequence; the last frame difference image in the sorted sequence is taken as the target image, the other frame difference images are taken as contrast images, and the area images in the target image are taken as target areas. The reason is that the area images in the frame difference image closest in time to the current moment best characterize the state of the dirt solids in the current time image frame.
Preferably, in one embodiment of the present invention, the method for acquiring an angle corresponding to a movement direction includes:
Referring to fig. 2, a flowchart of a method for acquiring an angle corresponding to a movement direction according to an embodiment of the invention is shown, and the method includes the following steps:
Step S201: and determining a matching region of the target region and a movement angle between the target region and a corresponding matching region according to the position relationship between the target region and the region image in the contrast image.
In order to facilitate comparison of the positional relationship of the area images in the different frame difference images, in the embodiment of the present invention a two-dimensional coordinate system is established in the same way in all the frame difference images, and the centroid of each area image in all the frame difference images is acquired. For example, with the lower-left corner of the frame difference image as the origin, the horizontal axis pointing horizontally to the right and the vertical axis pointing vertically upward, each pixel point has a two-dimensional coordinate $(x, y)$.
Then, in the target image, for convenience of explanation and explanation, a target area is selected as the area to be measured, and in each comparison image, an area image closest to the centroid distance between the areas to be measured is used as a matching area, wherein the matching area is the area most likely to represent the same dirt solid as the area to be measured.
Finally, in each contrast image, a ray is drawn starting from the centroid of the matching area corresponding to the area to be measured and passing through the position corresponding to the centroid of the area to be measured; the counter-clockwise angle between this ray and the horizontal direction is taken as the movement angle between that matching area and the area to be measured, recorded as $\alpha_i^j$ for the $i$-th matching area and the area to be measured $j$.
It should be noted that, the method for obtaining the centroid is a process well known to those skilled in the art, and will not be described herein.
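The centroid, matching-region selection and motion-angle computation of step S201 can be sketched as follows; the lower-left-origin coordinate system of this embodiment is assumed, and `region_centroid`, `motion_angle` and `nearest_region` are illustrative helper names, not names used by the patent:

```python
import numpy as np


def region_centroid(mask):
    """Centroid (x, y) of a boolean region mask, with the image's lower-left
    corner as origin and the vertical axis pointing upward."""
    ys, xs = np.nonzero(mask)
    height = mask.shape[0]
    return xs.mean(), (height - 1) - ys.mean()   # flip rows so y grows upward


def motion_angle(match_centroid, target_centroid):
    """Counter-clockwise angle in degrees, range [0, 360), of the ray from the
    matching region's centroid through the position of the centroid of the
    region to be measured, measured from the horizontal direction."""
    dx = target_centroid[0] - match_centroid[0]
    dy = target_centroid[1] - match_centroid[1]
    return float(np.degrees(np.arctan2(dy, dx)) % 360.0)


def nearest_region(target_centroid, candidate_masks):
    """Matching region = the region image whose centroid is closest to the
    centroid of the region to be measured."""
    centroids = [region_centroid(m) for m in candidate_masks]
    dists = [np.hypot(c[0] - target_centroid[0], c[1] - target_centroid[1])
             for c in centroids]
    best = int(np.argmin(dists))
    return best, centroids[best]
```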
Step S202: and obtaining the movement direction weight of the target area according to the hue value difference and the brightness value difference between the target area and the corresponding matching area.
Since the probability of a large change in the light conditions in the sedimentation tank is low, there is no significant difference in hue and brightness values of the same dirt solid even at different times, so that the brightness and hue value features are preferentially used when comparing the degree of similarity between the region image and the matching region.
And acquiring the hue value average value and the brightness value average value of all pixel points in each regional image in the target image and each contrast image.
Then, in order to analyze the similarity between the region to be measured and the matching regions, the difference of the hue value means of the region to be measured and each matching region is taken as the hue difference value, and the difference of the brightness value means of the region to be measured and each matching region is taken as the brightness difference value; a difference factor is obtained from the hue difference value and the brightness difference value, where both are positively correlated with the difference factor and the difference factor is a normalized value; the value of the difference factor after negative correlation mapping is taken as the direction weight. The direction weight can be regarded as the reliability of the motion angle calculated in step S201.
Taking the area image $j$ in the target image as the region to be measured for example, the formula model of the direction weight may specifically be, for example:

$$w_i^j = 1 - \mathrm{Norm}\left(\left|\overline{H}_j - \overline{H}_i\right| \times \left|\overline{V}_j - \overline{V}_i\right|\right)$$

Wherein, $w_i^j$ represents the direction weight between the region to be measured $j$ and the $i$-th matching region; $\overline{H}_j$ represents the hue value mean of the region to be measured $j$; $\overline{H}_i$ represents the hue value mean of the $i$-th matching region; $\overline{V}_j$ represents the brightness value mean of the region to be measured $j$; $\overline{V}_i$ represents the brightness value mean of the $i$-th matching region; $\mathrm{Norm}(\cdot)$ represents the normalization function.

In the formula model of the direction weight, the difference of the brightness value means of the region to be measured and each matching region is calculated as the brightness difference value $\left|\overline{V}_j - \overline{V}_i\right|$; when this value is smaller, the brightness characteristics of the region to be measured and the matching region are more consistent, and the probability that the two represent the same dirt solid is higher. Similarly, the difference of the hue value means of the region to be measured and each matching region is calculated as the hue difference value $\left|\overline{H}_j - \overline{H}_i\right|$; the smaller this value, the more consistent the hue characteristics of the region to be measured and the matching region, and likewise the greater the probability that the two represent the same dirt solid. Therefore, the brightness difference value and the hue difference value are combined, and their product is normalized to obtain the difference factor $\mathrm{Norm}\left(\left|\overline{H}_j - \overline{H}_i\right| \times \left|\overline{V}_j - \overline{V}_i\right|\right)$; the difference factor is then mapped by negative correlation to correct the logical relation, giving the direction weight between the region to be measured and each matching region, which can be used as the reliability of the motion angle calculated in step S201.
It should be noted that, since the hue difference value and the brightness difference value are both positively correlated with the difference factor, in other embodiments of the present invention, a normalized value of the sum of the hue difference value and the brightness difference value may be used as the difference factor, and then the difference factor is negatively correlated and mapped to obtain the direction weight, and the specific method is not limited herein.
Step S203: and obtaining the angle corresponding to the movement direction of the target area according to the movement direction weight and the movement angle of the target area.
Taking the product of the direction weight corresponding to each matching region and the motion angle corresponding to each matching region as a direction angle factor corresponding to each matching region; and taking the average value of the direction angle factors corresponding to all the matching areas as the angle corresponding to the movement direction of the area to be detected.
Taking the target area $j$ as the region to be measured for example, the formula model of the angle corresponding to the movement direction may specifically be, for example:

$$\theta_j = \frac{1}{N}\sum_{i=1}^{N} w_i^j \times \alpha_i^j$$

Wherein, $\theta_j$ represents the angle corresponding to the movement direction of the region to be measured $j$; $w_i^j$ represents the direction weight between the region to be measured $j$ and the $i$-th matching region; $\alpha_i^j$ represents the motion angle between the region to be measured $j$ and the $i$-th matching region; $N$ represents the total number of matching regions.

In the formula model of the angle corresponding to the movement direction, the direction weight between the region to be measured and a matching region serves as the reliability: the larger the direction weight $w_i^j$, the higher the probability that the matching region and the region to be measured represent the same dirt solid, so the motion angle between them should be given a larger proportion when calculating the angle of the movement direction of the region to be measured; conversely, the smaller the direction weight $w_i^j$, the lower the probability that the two represent the same dirt solid, so the corresponding motion angle should be given a smaller proportion. The product of the direction weight and the motion angle between the region to be measured and each matching region is therefore taken as the direction angle factor $w_i^j \times \alpha_i^j$, and finally the mean value of the direction angle factors over all matching regions corresponding to the region to be measured is taken as the angle corresponding to its movement direction.
By the method, the corresponding angle of the motion direction of each region image in the target region can be obtained.
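Putting steps S202 and S203 together, a sketch of the direction weights and the weighted movement-direction angle could look as follows; since the embodiment only states that the difference factor is "normalized", max-scaling over all matching regions is assumed here for illustration:

```python
import numpy as np


def direction_weights(h_target, v_target, h_matches, v_matches):
    """Direction weights between the region to be measured and each matching
    region: w_i = 1 - Norm(|H_j - H_i| * |V_j - V_i|).

    h_target, v_target: mean hue and brightness of the region to be measured.
    h_matches, v_matches: mean hue and brightness of each matching region.
    """
    h_matches = np.asarray(h_matches, dtype=float)
    v_matches = np.asarray(v_matches, dtype=float)
    raw = np.abs(h_target - h_matches) * np.abs(v_target - v_matches)
    difference_factor = raw / raw.max() if raw.max() > 0 else np.zeros_like(raw)
    return 1.0 - difference_factor                       # negative-correlation mapping


def motion_direction_angle(weights, motion_angles):
    """Angle of the movement direction: mean of the direction-angle factors
    (direction weight x motion angle) over all matching regions."""
    weights = np.asarray(weights, dtype=float)
    motion_angles = np.asarray(motion_angles, dtype=float)
    return float(np.mean(weights * motion_angles))
```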
Step S3: in the current time image frame, analyzing the difference of hue values between the pixel points in the corresponding area of the target area and other pixel points in the opposite direction of the movement direction of the target area and the saturation of the pixel points to obtain a smear degree value of the current time image frame; and obtaining the self-adaptive Gaussian parameters according to the discrete condition of the motion directions of all the target areas, the smear degree value of the image frames at the current moment and the fluctuation condition of the saturation among the pixels in the opposite direction of the motion directions of all the target areas.
When a moving object is photographed, a blurred motion artifact, that is, a smear, is usually generated at the tail end of the object along its motion direction; the more pronounced the smear, the lower the quality of the image and the greater the adjustment needed in the subsequent image enhancement, so the smear degree of the current time image frame needs to be evaluated.
Because in the image with motion blur smear, smear usually appears in a virtual form, the difference between the pixel point of the area where each dirt solid is located and the pixel point in the opposite direction of the motion direction of the pixel point can be quantified, meanwhile, compared with the actual dirt solid, the pixel point belonging to the smear area has no too large difference in hue value, but the saturation of the pixel point is lower, so that the smear degree value of the current moment image frame is obtained by analyzing the difference in hue value between the pixel point in the area image and other pixel points in the opposite direction of the motion direction of the area image and the saturation of the pixel point.
Preferably, in one embodiment of the present invention, the method for acquiring the smear level value includes:
Since the region image representing the dirt solid is in the frame difference image and in order to determine the smear level value of the current time image frame, the analysis should be performed in the current time image frame, the region of the target region at the same position in the current time image frame is taken as the analysis region.
In view of the fact that smear is not an infinite continuation, it has certain boundaries and termination points. Therefore, in this embodiment of the present invention, in order to determine the smear pixel point and the smear degree value, a method of traversing the pixel point in the opposite direction to the moving direction of the analysis area is adopted.
In the current time image frame, taking each pixel point in the analysis area as a target point, for each target point, taking each target point as a starting point, sequentially traversing the pixel points in the opposite direction of the motion direction corresponding to the analysis area to which the target point belongs, and taking the pixel points traversed each time as the to-be-detected points corresponding to the target points. For example, the angle of the direction of motion corresponding to the analysis area is 45 degrees, i.e., the northeast direction, and the opposite direction should be the southwest direction, i.e., 225 degrees.
Then under each traversal, obtaining a smear factor corresponding to each target point under the current traversal number according to the difference of hue values between each target point and the corresponding point to be detected and the saturation value of the point to be detected; the calculation method of the smear factor comprises the following steps: for any target point, carrying out negative correlation mapping on the difference of hue values between the target point and the point to be measured, and taking the normalized value as an adjustment weight; and taking the product of the value obtained after carrying out negative correlation mapping on the saturation value of the to-be-measured point and the corresponding adjustment weight as a smear factor corresponding to the target point under the current traversal times.
And taking the average value of the smear factors of all the target points under the current traversal times as the smear characteristic value of the image frame at the current time. The formula model of the smear feature value may specifically be, for example:
$$R_n = \frac{1}{M}\sum_{m=1}^{M}\frac{1}{K_m}\sum_{k=1}^{K_m} e^{-\left|H_{m,k} - H_{m,k}^{\,n}\right|} \times \left(1 - S_{m,k}^{\,n}\right)$$

Wherein, $R_n$ represents the smear characteristic value of the current time image frame at the $n$-th traversal; $M$ represents the total number of analysis areas in the current time image frame; $K_m$ represents the total number of target points in the $m$-th analysis area; $H_{m,k}$ represents the hue value of the $k$-th target point in the $m$-th analysis area; $H_{m,k}^{\,n}$ represents the hue value of the point to be measured corresponding to the $k$-th target point in the $m$-th analysis area at the $n$-th traversal; $S_{m,k}^{\,n}$ represents the saturation value of that point to be measured at the $n$-th traversal; $e$ is the natural constant, used as the base of the exponential function.

In the formula model of the smear characteristic value, the traversal is performed for all target points simultaneously. For each target point, at the first traversal the first pixel point in the direction opposite to the motion direction of the analysis area to which the target point belongs, starting from the target point, is taken as the point to be measured. When the saturation value of the point to be measured is smaller, it is more likely to be a smear pixel point, so the saturation value of the point to be measured is mapped by negative correlation to correct the logical relation, giving $1 - S_{m,k}^{\,n}$. Meanwhile, in order to avoid interference from low-saturation background regions, the hue value difference $\left|H_{m,k} - H_{m,k}^{\,n}\right|$ between the target point and the point to be measured is used as a weight: the larger this value, the less the point to be measured matches its target point, i.e. the less likely it is to be a smear pixel point of the target point; conversely, the smaller this value, the more likely the current point to be measured is a smear pixel point of the target point. The hue value difference is therefore mapped by negative correlation and normalized, giving the adjustment weight $e^{-\left|H_{m,k} - H_{m,k}^{\,n}\right|}$. The product of the adjustment weight and $1 - S_{m,k}^{\,n}$ is taken as the smear factor of each target point at the first traversal, and the mean value of the smear factors of all target points in all analysis areas of the current time image frame is taken as the smear characteristic value of the current time image frame at the first traversal. Similarly, at the second traversal, the second pixel point in the direction opposite to the motion of the analysis area, starting from each target point, is taken as the point to be measured and the above analysis yields the smear characteristic value at the second traversal, and so on for the third traversal, the fourth traversal and so forth.

Since the smear has a boundary and its degree gradually decreases, in the above traversal process the smear characteristic value increases with the number of traversals while the points to be measured are still smear pixel points of their target points, and decreases otherwise. Therefore, when the smear characteristic value of the current time image frame at the next traversal is smaller than that at the current traversal, the current traversal count is taken as the reference count, the pixel points traversed so far for each target point are taken as its smear pixel points, and the smear characteristic value at this last traversal is taken as the smear degree value of the current time image frame, recorded as $D$.
In other embodiments of the present invention, when the traversal is stopped, the average value of the smear feature values of the image frame at the current time under the previous traversal times may be used as the smear degree value of the image frame at the current time.
Therefore, the smear degree value of the image frame at the current time can be obtained, and the quality of the image frame at the current time can be measured.
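A compact sketch of the traversal just described is given below. It assumes the hue and saturation channels are scaled to [0, 1], that one pixel is stepped per traversal along the direction opposite to the motion, that out-of-image points are clipped to the border, and that the negative-correlation mappings are exp(-|dH|) and (1 - S) as in the formula above; `max_steps` is only a safety cap:

```python
import numpy as np


def smear_degree(hsv_frame, analysis_masks, motion_angles_deg, max_steps=50):
    """Smear degree value of the current-time image frame.

    hsv_frame: HSV image with H and S scaled to [0, 1].
    analysis_masks: one boolean mask per analysis area.
    motion_angles_deg: movement-direction angle of each analysis area.
    Returns (smear_degree_value, reference_step_count).
    """
    height, width = hsv_frame.shape[:2]
    hue, sat = hsv_frame[..., 0], hsv_frame[..., 1]

    # Unit step opposite to each area's motion direction (mathematical y axis points up).
    steps = []
    for ang in motion_angles_deg:
        rad = np.radians((ang + 180.0) % 360.0)
        steps.append((np.cos(rad), np.sin(rad)))

    prev_value = None
    for n in range(1, max_steps + 1):
        factors = []
        for mask, (dx, dy) in zip(analysis_masks, steps):
            ys, xs = np.nonzero(mask)                   # target points of this area
            px = np.clip(np.round(xs + n * dx).astype(int), 0, width - 1)
            py = np.clip(np.round(ys - n * dy).astype(int), 0, height - 1)  # minus: rows grow downward
            hue_diff = np.abs(hue[ys, xs] - hue[py, px])
            weight = np.exp(-hue_diff)                  # adjustment weight
            factors.append(np.mean(weight * (1.0 - sat[py, px])))  # smear factors of this area
        value = float(np.mean(factors))                 # smear characteristic value at traversal n
        if prev_value is not None and value < prev_value:
            return prev_value, n - 1                    # previous traversal = reference count
        prev_value = value
    return prev_value, max_steps
```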
Since the dirt distribution in the sedimentation tank is complex and the moving direction is generally irregular, and a relatively ideal result can be obtained by the Gaussian distribution setting parameter for the fuzzy kernel, in the embodiment of the invention, the self-adaptive Gaussian parameter can be more suitable for the moving mode of the dirt in the current time image frame and the condition in the current time image frame by analyzing the discrete condition of the moving direction of the region image, the smear degree value of the current time image frame and the fluctuation condition of the saturation among pixels in the opposite direction of the moving direction of the region image, namely the fluctuation condition of the saturation of the traversing pixels in the process, so as to obtain a better enhancement effect in the subsequent process.
Preferably, in one embodiment of the present invention, the method for acquiring the adaptive gaussian parameter includes:
Firstly, in the current time image frame, normalizing the information entropy of angles corresponding to the motion directions of all analysis areas, and inversely correlating the mapped values, wherein the product of the mapped values and the smear degree value of the current time image frame is used as an adjustment coefficient.
And then in the current time image frame, taking the standard deviation of the saturation of the corresponding smear pixel points of all target points in each analysis area as a blurring factor, and taking the average value of the blurring factors of all the analysis areas as a blurring characteristic value of the current time image frame.
And finally taking the product of the fuzzy characteristic value and the adjustment coefficient as the self-adaptive Gaussian parameter. The formula model of the adaptive Gaussian parameters comprises:
$$\sigma = \left(1 - \frac{E}{\log_2 Z}\right) \times D \times \frac{1}{M}\sum_{m=1}^{M} s_m$$

Wherein, $\sigma$ represents the adaptive Gaussian parameter; $E$ represents the information entropy of the angles corresponding to the motion directions of all analysis areas; $Z$ represents the number of types of angles corresponding to the motion directions of all analysis areas, identical angles being counted as one type; $D$ represents the smear degree value of the current time image frame; $M$ represents the total number of analysis areas in the current time image frame; $s_m$ represents the standard deviation of the saturation of the smear pixel points corresponding to all target points in the $m$-th analysis area; $\log_2$ denotes the logarithm with base 2.

In the formula model of the adaptive Gaussian parameter, when the smear degree value of the current time image frame is larger, a larger Gaussian parameter should be used in order to improve the deblurring effect. Likewise, the smaller the information entropy $E$ of the angles corresponding to the motion directions of all analysis areas, the smaller the difference between the motion directions of the dirt solids in the current time image frame and the more consistent and uniform the blurring pattern in the image; a larger Gaussian parameter should then be used to cover and compensate for this uniform blurring pattern more fully. The entropy is therefore normalized by its upper limit $\log_2 Z$ in this embodiment of the invention, the normalized value $\frac{E}{\log_2 Z}$ is mapped by negative correlation to correct the logical relation, and the result is multiplied by the smear degree value to obtain the adjustment coefficient $\left(1 - \frac{E}{\log_2 Z}\right) \times D$. Then the standard deviation of the saturation of the smear pixel points in each analysis area is calculated as the blur factor $s_m$: the larger this value, the more discrete the saturation distribution among the smear pixel points and the more pronounced its fluctuation, i.e. the larger the current degree of blurring, which also calls for a larger Gaussian parameter. The blur factors of all analysis areas are combined and their mean value is taken as the blur characteristic value of the current time image frame. Finally, the product of the blur characteristic value and the adjustment coefficient is taken as the adaptive Gaussian parameter of the current time image frame.
So far, the adaptive gaussian parameters required in the subsequent image enhancement process can be obtained.
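The adaptive Gaussian parameter of the formula above can be sketched as follows; counting identical angle values as one "type" follows the embodiment, while the handling of the single-type case (normalized entropy taken as 0) is an assumption:

```python
import numpy as np


def adaptive_gaussian_parameter(direction_angles, smear_degree_value,
                                smear_saturation_per_area):
    """sigma = (1 - E / log2(Z)) * D * mean(std of smear-pixel saturation per area).

    direction_angles: movement-direction angle of each analysis area.
    smear_saturation_per_area: list of 1-D arrays, the saturation values of the
    smear pixel points found for each analysis area.
    """
    angles = np.asarray(direction_angles, dtype=float)
    values, counts = np.unique(angles, return_counts=True)
    probs = counts / counts.sum()
    entropy = float(-np.sum(probs * np.log2(probs)))          # E
    z = len(values)                                            # number of angle types
    normalized_entropy = entropy / np.log2(z) if z > 1 else 0.0
    adjustment = (1.0 - normalized_entropy) * smear_degree_value

    blur_factors = [float(np.std(s)) for s in smear_saturation_per_area]
    blur_feature = float(np.mean(blur_factors))                # blur characteristic value
    return adjustment * blur_feature
```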
Step S4: performing enhancement processing on the current time image frame according to the adaptive Gaussian parameters to obtain an enhanced image; and performing dirt purification monitoring according to the enhanced image to obtain a dirt purification result at the current moment.
Because the adaptive gaussian parameters acquired in step S3 are more consistent with the situation in the current time image frame, the enhancement processing is performed on the current time image frame based on the adaptive gaussian parameters, and the obtained enhancement image is more accurate and clear.
Preferably, in one embodiment of the present invention, the enhanced image acquisition method includes:
Since the non-blind deconvolution method is fast in processing and not prone to producing ringing artifacts, in the embodiment of the invention the non-blind deconvolution method is used for image enhancement.
Firstly, determining the size of a blurring kernel according to the requirement, and in order to improve the image enhancement effect, the size of the blurring kernel should be as close as possible to the size of a blurring region, and the size of the blurring region can be determined by using the number of traversal times when determining the smear degree value in the step S3, so that an odd number which is larger than the reference times and has the smallest difference with the reference times is used as the side length of the initial blurring kernel. For example, when the reference number is 5, the side length of the initial blur kernel should be 7, that is, the length and width of the initial blur kernel should be 7 each, and when the reference number is 4, the side length of the initial blur kernel should be 5, that is, the length and width of the initial blur kernel should be 5 each.
Then, the elements of the initial blur kernel are filled so that their distribution accords with the Gaussian distribution formed by the adaptive Gaussian parameter and their sum equals 1, yielding the updated blur kernel.
And finally, carrying out enhancement processing on the image of the current moment image frame in the RGB space by using a non-blind deconvolution method according to the updated fuzzy core to obtain a corresponding enhanced image.
It should be noted that, the size of the blur kernel may be adjusted according to the implementation scenario, which is not limited herein; the operation of using gaussian parameters to make the sum of the elements in the fuzzy kernel 1 is a process well known to those skilled in the art, and will not be described in detail herein; likewise, the non-blind deconvolution method is a technical means well known to those skilled in the art, and will not be described in detail herein.
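The following sketch builds the initial blur kernel size from the reference count, fills it with the Gaussian distribution defined by the adaptive parameter, and applies a non-blind deconvolution channel by channel. The embodiment does not name a specific non-blind deconvolution algorithm, so frequency-domain Wiener filtering (with an assumed constant noise-to-signal ratio `snr`) is used here purely as one common illustrative choice:

```python
import numpy as np


def initial_kernel_size(reference_count):
    """Smallest odd number strictly greater than the reference traversal count."""
    side = reference_count + 1
    return side if side % 2 == 1 else side + 1


def gaussian_blur_kernel(side, sigma):
    """Square blur kernel whose elements follow the Gaussian distribution with
    the adaptive parameter sigma and sum to 1."""
    ax = np.arange(side) - (side - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()


def wiener_deconvolve(channel, kernel, snr=0.01):
    """One common non-blind deconvolution (Wiener filtering in the frequency
    domain), used only as an illustrative choice. channel: float image in [0, 1]."""
    pad = np.zeros_like(channel)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))   # center kernel at origin
    K = np.fft.fft2(pad)
    C = np.fft.fft2(channel)
    restored = np.fft.ifft2(C * np.conj(K) / (np.abs(K) ** 2 + snr))
    return np.clip(restored.real, 0.0, 1.0)


def enhance_rgb(rgb, kernel):
    """Apply the deconvolution channel-by-channel to the RGB image (uint8)."""
    out = [wiener_deconvolve(rgb[..., c] / 255.0, kernel) for c in range(3)]
    return (np.stack(out, axis=-1) * 255.0).astype(np.uint8)
```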
The deblurred enhanced image is obtained, and the enhanced image can be analyzed, so that the pollution purification monitoring is realized, and the pollution purification result at the current moment is obtained.
Preferably, in one embodiment of the present invention, the method for performing the soil purifying and monitoring according to the enhanced image to obtain the soil purifying result at the current moment includes:
And taking the enhanced image as the input of a pre-trained neural network, and outputting the dirt purifying result at the current moment.
It should be noted that the training process of the neural network is a technical means well known to those skilled in the art and is not described in detail herein; it is briefly summarized as follows: a large number of labeled sedimentation-process images are obtained by utilizing big data, and the sedimentation progress can be divided into classes such as 0%, 20%, 40%, 60%, 80% and 100%. All sedimentation-process images are then divided into a training set and a verification set at a ratio of 7:3; a DNN neural network is trained with a cross-entropy loss function using gradient descent until the loss function converges, after which training ends and the verification set is used for validation.
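As an illustrative sketch only (the embodiment names a DNN with a cross-entropy loss trained by gradient descent but gives no architecture, so the layer sizes, the 64x64 RGB input and the PyTorch framework below are all assumptions):

```python
import torch.nn as nn


class SedimentationNet(nn.Module):
    """Toy DNN classifier over sedimentation-progress classes."""
    def __init__(self, num_classes=6):           # 0%, 20%, 40%, 60%, 80%, 100%
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def train_one_epoch(model, loader, optimizer):
    criterion = nn.CrossEntropyLoss()             # cross-entropy loss, as in the embodiment
    model.train()
    for images, labels in loader:                 # loader yields the 7:3 training split
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```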
When the output sewage purification result is 100%, prompt information is sent out, so that workers can conveniently and timely carry out sedimentation subsequent treatment.
It should be noted that, in order to facilitate the operation, all index data involved in the operation in the embodiment of the present invention is subjected to data preprocessing, so as to cancel the dimension effect. The specific means for removing the dimension influence is a technical means well known to those skilled in the art, and is not limited herein.
The present embodiment also provides a combined farm waste cleaning system comprising a memory, a processor and a computer program, wherein the memory is configured to store a corresponding computer program, the processor is configured to execute the corresponding computer program, and the computer program is configured to implement the steps of any one of the combined farm waste cleaning methods when executed on the processor.
In summary, the embodiment of the invention first acquires a plurality of image frames of the dirt sedimentation process in time sequence, including the current time image frame and historical image frames; because the dirt types in the sedimentation tank are complex and colors are difficult to distinguish accurately in RGB space, all image frames are converted to the HSV color space. Since the sedimentation process is dynamic and the dirt solids move during sedimentation, frame difference images can be obtained and the area images corresponding to the dirt can be screened based on the gray value differences of pixel points at the same position in adjacent image frames; the target image and the contrast images are then determined according to the time sequence relation of the frame difference images, and the area images in the target image are taken as target areas. The positional relation between area images reflects the movement of the dirt, while the hue value and the brightness value indicate whether two area images correspond to the same dirt solid; therefore the positional relation and the changes in hue value and brightness value between each target area and the area images in the contrast images are calculated to obtain the movement direction of each target area, namely of the dirt, which gives a good understanding of the movement mode of the dirt in the sedimentation tank and provides a reference for the subsequent calculation of the smear degree. Smear is the image distortion caused by the motion of a moving object and directly affects the overall quality of an image, so in the current time image frame the pixel points in the area corresponding to each target area and the pixel points lying opposite to its movement direction are analyzed: the difference in hue value between the pixel points is compared and combined with the saturation of the pixel points to calculate the smear degree value of the current time image frame. Since the movement directions of different dirt are generally irregular, and in order to better fit the movement mode of the dirt in the current time image frame, the discreteness of the movement directions of the target areas, the smear degree value of the current time image frame and the fluctuation of the saturation of the pixel points opposite to the movement direction are analyzed to obtain the adaptive Gaussian parameter of the current time image frame. Finally, the current time image frame is enhanced according to the obtained adaptive Gaussian parameter to obtain a more accurate enhanced image, and dirt purification monitoring is carried out according to the enhanced image, so that a more accurate dirt purification result at the current moment can be obtained.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. A cultivation-combined farm sewage purifying treatment method, the method comprising:
acquiring, in time sequence, a plurality of image frames of the dirt sedimentation process, wherein the image frames comprise a current-time image frame and a plurality of historical image frames, and converting all the image frames into an HSV color space;
obtaining frame difference images according to the gray value differences of pixel points at the same position among the image frames; carrying out connected domain analysis on each frame difference image to obtain area images corresponding to dirt; determining a target image and contrast images according to the time sequence relation of the frame difference images, and taking the area images in the target image as target areas; obtaining an angle corresponding to the movement direction of each target area according to the positional relation and the changes in hue value and brightness value between that target area and the area images in the contrast images;
in the current-time image frame, analyzing the differences in hue value between the pixel points in the area corresponding to each target area and the other pixel points lying opposite to the movement direction of that target area, together with the saturation of those pixel points, to obtain a smear degree value of the current-time image frame; obtaining an adaptive Gaussian parameter according to the dispersion of the movement directions of all target areas, the smear degree value of the current-time image frame, and the fluctuation of the saturation among the pixel points lying opposite to the movement directions of all target areas;
performing enhancement processing on the current-time image frame according to the adaptive Gaussian parameter to obtain an enhanced image; and performing dirt purification monitoring according to the enhanced image to obtain a dirt purification result at the current moment.
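As a rough, non-authoritative illustration of the connected domain analysis named in this claim, the sketch below extracts the area images corresponding to dirt from one binary frame difference image; the minimum-area filter is an assumption added for robustness and is not part of the claim.

```python
import cv2

def extract_region_images(diff_image, min_area=50):
    """Connected domain analysis of a binary frame difference image.

    Returns one mask and centroid per area image; min_area is an assumed filter.
    """
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        diff_image, connectivity=8)
    regions = []
    for label in range(1, num_labels):          # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] < min_area:
            continue
        regions.append({"mask": labels == label,
                        "centroid": tuple(centroids[label])})
    return regions
```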
2. The cultivation-combined farm sewage purifying treatment method according to claim 1, wherein determining the target image and the contrast images according to the time sequence relation of the frame difference images comprises:
arranging all the frame difference images in time sequence to obtain an ordered sequence, taking the last frame difference image in the ordered sequence as the target image, and taking the other frame difference images in the ordered sequence as the contrast images.
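A minimal sketch of this selection step, assuming the frame difference images are kept as (timestamp, image) pairs (a representation chosen here for illustration only):

```python
def split_target_and_contrast(diff_images_with_time):
    """Order frame difference images by time; the last one is the target image,
    the earlier ones are the contrast images."""
    ordered = [img for _, img in sorted(diff_images_with_time, key=lambda p: p[0])]
    return ordered[-1], ordered[:-1]
```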
3. The cultivation-combined farm sewage purifying treatment method according to claim 1, wherein the method of acquiring the angle corresponding to the movement direction comprises:
acquiring the mean hue value and mean brightness value of all pixel points in each area image of the target image and of each contrast image;
selecting any target area as a region to be detected, and selecting, in each contrast image, the area image whose centroid is closest to the centroid of the region to be detected as a matching area;
in each contrast image, taking the centroid of the matching area corresponding to the region to be detected as a starting point, drawing a ray toward the position of the centroid of the region to be detected, and taking the counterclockwise included angle between the ray and the horizontal direction as the movement angle between that matching area and the region to be detected;
taking the difference between the mean hue values of the region to be detected and each matching area as a hue difference value; taking the difference between the mean brightness values of the region to be detected and each matching area as a brightness difference value; obtaining a difference factor from the hue difference value and the brightness difference value, wherein both the hue difference value and the brightness difference value are positively correlated with the difference factor, and the difference factor is a normalized value; taking the value obtained by applying a negative correlation mapping to the difference factor as a direction weight;
taking the product of the direction weight and the movement angle corresponding to each matching area as a direction angle factor corresponding to that matching area; and taking the mean of the direction angle factors corresponding to all matching areas as the angle corresponding to the movement direction of the region to be detected.
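The sketch below follows the structure of this claim for one region to be detected. The exact form of the normalized difference factor and its negative correlation mapping are not fixed by the claim, so an exponential mapping is assumed here; the region dictionaries ('centroid', 'hue_mean', 'value_mean') are likewise an assumed representation.

```python
import numpy as np

def region_stats(hsv_frame, mask):
    """Mean hue and mean brightness (V) of the pixels in one area image."""
    return hsv_frame[..., 0][mask].mean(), hsv_frame[..., 2][mask].mean()

def motion_direction_angle(target_region, contrast_regions_per_image):
    """Angle of the movement direction of one region to be detected.

    target_region and each matching region are dicts with 'centroid',
    'hue_mean' and 'value_mean' (an assumed representation).
    """
    cx, cy = target_region["centroid"]
    angle_factors = []
    for regions in contrast_regions_per_image:  # one region list per contrast image
        # matching area: the area image whose centroid is closest to the region to be detected
        match = min(regions, key=lambda r: np.hypot(r["centroid"][0] - cx,
                                                    r["centroid"][1] - cy))
        mx, my = match["centroid"]
        # counterclockwise angle between the ray (matching area -> region to be detected)
        # and the horizontal direction; the image y axis points down, hence the sign flip
        motion_angle = np.degrees(np.arctan2(-(cy - my), cx - mx)) % 360.0
        hue_diff = abs(target_region["hue_mean"] - match["hue_mean"])
        value_diff = abs(target_region["value_mean"] - match["value_mean"])
        diff_factor = 1.0 - np.exp(-(hue_diff + value_diff))  # normalized, grows with both differences (assumed mapping)
        direction_weight = 1.0 - diff_factor                   # negative correlation mapping of the difference factor
        angle_factors.append(direction_weight * motion_angle)
    return float(np.mean(angle_factors))
```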
4. The cultivation-combined farm sewage purifying treatment method according to claim 1, wherein the method of obtaining the smear degree value comprises:
taking the area at the same position as each target area in the current-time image frame as an analysis area;
in the current-time image frame, taking each pixel point in the analysis area as a target point; for each target point, taking that target point as a starting point and sequentially traversing the pixel points lying opposite to the movement direction corresponding to the analysis area to which the target point belongs, and taking the pixel point traversed at each step as the point to be detected corresponding to the target point;
at each traversal, obtaining a smear factor for each target point at the current traversal number according to the difference in hue value between the target point and its corresponding point to be detected and the saturation value of the point to be detected; taking the mean of the smear factors of all target points at the current traversal number as the smear characteristic value of the current-time image frame;
when the smear characteristic value of the current-time image frame at the current traversal number is smaller than the smear characteristic value at the previous traversal number, taking the previous traversal number as the reference number, taking the pixel points traversed up to the reference number for each target point as smear pixel points, and taking the smear characteristic value at the reference number as the smear degree value of the current-time image frame.
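A possible reading of this claim is sketched below; it relies on a smear_factor helper (a sketch of which follows claim 5), and the stopping rule, the maximum number of traversals, and the per-area data layout are assumptions of this sketch rather than details fixed by the claim.

```python
import numpy as np

def smear_degree_value(hsv_current, analysis_areas, max_steps=15):
    """Smear degree value of the current-time image frame.

    analysis_areas: list of dicts with 'points' (pixel coordinates of one
    analysis area) and 'angle' (movement direction of the matching target area).
    """
    h, w = hsv_current.shape[:2]
    prev_value = None
    for step in range(1, max_steps + 1):
        factors = []
        for area in analysis_areas:
            # unit step opposite to the movement direction of this analysis area
            theta = np.radians((area["angle"] + 180.0) % 360.0)
            dx, dy = np.cos(theta), -np.sin(theta)    # image y axis points down
            for (x, y) in area["points"]:             # every pixel point is a target point
                tx, ty = int(round(x + step * dx)), int(round(y + step * dy))
                if 0 <= tx < w and 0 <= ty < h:
                    factors.append(smear_factor(hsv_current, (x, y), (tx, ty)))
        value = float(np.mean(factors)) if factors else 0.0  # smear characteristic value at this step
        if prev_value is not None and value < prev_value:
            return prev_value, step - 1               # smear degree value, reference number
        prev_value = value
    return prev_value, max_steps
```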
5. The cultivation-combined farm sewage purifying treatment method according to claim 4, wherein the method of acquiring the smear factor comprises:
for any target point, applying a negative correlation mapping to the difference in hue value between the target point and the point to be detected and taking the normalized result as an adjustment weight; and taking the product of the value obtained by applying a negative correlation mapping to the saturation value of the point to be detected and the corresponding adjustment weight as the smear factor of that target point at the current traversal number.
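A minimal instantiation of this smear factor, used by the sketch after claim 4; the exponential negative correlation mapping and the (1 - s) mapping of the saturation are assumptions, since the claim only constrains the direction of the correlations.

```python
import numpy as np

def smear_factor(hsv_frame, target_pt, probe_pt):
    """Smear factor between a target point and its point to be detected."""
    hue = hsv_frame[..., 0].astype(np.float32) / 179.0   # OpenCV hue range is 0..179
    sat = hsv_frame[..., 1].astype(np.float32) / 255.0
    (x, y), (tx, ty) = target_pt, probe_pt
    hue_diff = abs(hue[y, x] - hue[ty, tx])
    adjust_weight = np.exp(-hue_diff)                    # negative correlation mapping, normalized to (0, 1]
    return float((1.0 - sat[ty, tx]) * adjust_weight)    # negatively mapped saturation x adjustment weight
```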
6. The cultivation-combined farm sewage purifying treatment method according to claim 4, wherein the method of acquiring the adaptive Gaussian parameter comprises:
for the current-time image frame, normalizing the information entropy of the angles corresponding to the movement directions of all analysis areas and applying a negative correlation mapping, and taking the product of the result and the smear degree value of the current-time image frame as an adjustment coefficient;
in the current-time image frame, taking the standard deviation of the saturation of the smear pixel points corresponding to all target points in each analysis area as a blur factor, and taking the mean of the blur factors of all analysis areas as a blur characteristic value of the current-time image frame;
and taking the product of the blur characteristic value and the adjustment coefficient as the adaptive Gaussian parameter.
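One way to read this claim is sketched below; the angle binning used for the information entropy and the choice of log base for the normalization are assumptions of the sketch.

```python
import numpy as np

def adaptive_gaussian_parameter(angles_deg, smear_degree, smear_saturations_per_area,
                                n_bins=36):
    """Adaptive Gaussian parameter of the current-time image frame.

    angles_deg: movement-direction angles of all analysis areas.
    smear_saturations_per_area: for each analysis area, the saturations of the
    smear pixel points of all its target points.
    """
    # information entropy of the movement-direction angles, normalized to [0, 1]
    hist, _ = np.histogram(angles_deg, bins=n_bins, range=(0.0, 360.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))
    entropy_norm = entropy / np.log(n_bins)
    adjustment = (1.0 - entropy_norm) * smear_degree      # negatively mapped entropy x smear degree value

    # blur characteristic value: mean of the per-area saturation standard deviations
    blur_factors = [np.std(s) for s in smear_saturations_per_area if len(s) > 0]
    blur_feature = float(np.mean(blur_factors)) if blur_factors else 0.0
    return blur_feature * adjustment
```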
7. The cultivation-combined farm sewage purifying treatment method according to claim 4, wherein the method of acquiring the enhanced image comprises:
acquiring the size of an initial blur kernel according to the reference number;
filling the initial blur kernel according to the adaptive Gaussian parameter so that its element distribution follows the Gaussian distribution defined by that parameter and its elements sum to 1, thereby obtaining an updated blur kernel;
and performing enhancement processing on the current-time image frame in RGB space by a non-blind deconvolution method using the updated blur kernel to obtain the enhanced image.
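The claim does not name a specific non-blind deconvolution algorithm; the sketch below uses Richardson-Lucy deconvolution from scikit-image as one example, and treats the adaptive Gaussian parameter as the standard deviation of the kernel, which is an assumption.

```python
import numpy as np
from skimage import restoration

def enhance_frame(bgr_current, sigma, kernel_size):
    """Build the updated blur kernel and deconvolve each channel of the
    current-time image frame in RGB space."""
    ax = np.arange(kernel_size) - (kernel_size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * max(sigma, 1e-6) ** 2))
    kernel /= kernel.sum()                               # elements sum to 1

    img = bgr_current.astype(np.float64) / 255.0
    channels = [restoration.richardson_lucy(img[..., c], kernel, num_iter=30)
                for c in range(3)]                       # num_iter is called iterations in older scikit-image
    return np.clip(np.stack(channels, axis=-1) * 255.0, 0, 255).astype(np.uint8)
```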
8. The cultivation-combined farm sewage purifying treatment method according to claim 1, wherein performing dirt purification monitoring according to the enhanced image to obtain the dirt purification result at the current moment comprises:
taking the enhanced image as the input of a pre-trained neural network, which outputs the dirt purification result at the current moment.
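The claim leaves the network architecture and its preprocessing open; the placeholder below only shows how the enhanced image might be handed to an already trained model (PyTorch is assumed purely for illustration).

```python
import torch

def monitor_purification(enhanced_bgr, model, device="cpu"):
    """Feed the enhanced image to a pre-trained network and return its output."""
    x = torch.from_numpy(enhanced_bgr[..., ::-1].copy()).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        return model(x.unsqueeze(0).to(device))          # dirt purification result at the current moment
```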
9. The cultivation-combined farm sewage purifying treatment method according to claim 7, wherein the size of the initial blur kernel is set as follows:
the smallest odd number greater than the reference number is taken as the side length of the initial blur kernel.
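This rule reduces to taking the smallest odd number strictly greater than the reference number, for example:

```python
def initial_kernel_size(reference_count):
    """Smallest odd number strictly greater than the reference number."""
    k = reference_count + 1
    return k if k % 2 == 1 else k + 1
```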
10. A cultivation-combined farm sewage purifying treatment system, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 9.
CN202410535393.XA 2024-04-30 2024-04-30 Cultivation-combined farm sewage purifying treatment method and system Active CN118154480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410535393.XA CN118154480B (en) 2024-04-30 2024-04-30 Cultivation-combined farm sewage purifying treatment method and system

Publications (2)

Publication Number Publication Date
CN118154480A true CN118154480A (en) 2024-06-07
CN118154480B CN118154480B (en) 2024-07-05

Family

ID=91298717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410535393.XA Active CN118154480B (en) 2024-04-30 2024-04-30 Cultivation-combined farm sewage purifying treatment method and system

Country Status (1)

Country Link
CN (1) CN118154480B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330544A (en) * 2019-08-05 2021-02-05 浙江宇视科技有限公司 Image smear processing method, device, equipment and medium
US11763485B1 (en) * 2022-04-20 2023-09-19 Anhui University of Engineering Deep learning based robot target recognition and motion detection method, storage medium and apparatus
CN115546047A (en) * 2022-09-02 2022-12-30 合肥埃科光电科技股份有限公司 Video image noise reduction method, device and medium based on improved local filtering algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Tengteng: "Research on Local Motion Deblurring Algorithms for Video Images", China Master's Theses Full-text Database, no. 3, 15 March 2022 (2022-03-15), pages 138-1680 *

Also Published As

Publication number Publication date
CN118154480B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN114724022B (en) Method, system and medium for detecting farmed fish shoal by fusing SKNet and YOLOv5
CN113077486B (en) Method and system for monitoring vegetation coverage rate in mountainous area
CN115908371B (en) Plant leaf disease and pest degree detection method based on optimized segmentation
CN110866872A (en) Pavement crack image preprocessing intelligent selection method and device and electronic equipment
CN113349111A (en) Dynamic feeding method, system and storage medium for aquaculture
CN114926407A (en) Steel surface defect detection system based on deep learning
CN111882555B (en) Deep learning-based netting detection method, device, equipment and storage medium
CN113344810A (en) Image enhancement method based on dynamic data distribution
CN115797225A (en) Unmanned ship acquisition image enhancement method for underwater topography measurement
Srinivas et al. Remote sensing image segmentation using OTSU algorithm
CN114187214A (en) Infrared and visible light image fusion system and method
CN114820401A (en) Method for enhancing marine backlight infrared image by combining histogram transformation and edge information
CN103177244B (en) Method for quickly detecting target organisms in underwater microscopic images
WO2020107308A1 (en) Low-light-level image rapid enhancement method and apparatus based on retinex
CN116823677B (en) Image enhancement method and device, storage medium and electronic equipment
CN116563768B (en) Intelligent detection method and system for microplastic pollutants
CN118154480B (en) Cultivation-combined farm sewage purifying treatment method and system
CN117830134A (en) Infrared image enhancement method and system based on mixed filtering decomposition and image fusion
CN113450272A (en) Image enhancement method based on sinusoidal curve change and application thereof
CN115601301B (en) Fish phenotype characteristic measurement method, system, electronic equipment and storage medium
CN116228574A (en) Gray image processing method and device
CN116524174A (en) Marine organism detection method and structure of multiscale attention-fused Faster RCNN
CN106934344B (en) quick pedestrian detection method based on neural network
CN113379611B (en) Image processing model generation method, processing method, storage medium and terminal
CN108133467B (en) Underwater image enhancement system and method based on particle calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant