CN113869103A - Object detection method, storage medium and system - Google Patents


Info

Publication number
CN113869103A
CN113869103A (application CN202110886785.7A)
Authority
CN
China
Prior art keywords
weight
sub
target
image
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110886785.7A
Other languages
Chinese (zh)
Inventor
陈伟璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202110886785.7A priority Critical patent/CN113869103A/en
Publication of CN113869103A publication Critical patent/CN113869103A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01G WEIGHING
    • G01G 19/00 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • G01G 19/02 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups for weighing wheeled or rolling bodies, e.g. vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an object detection method, a storage medium, and a system. The method comprises the following steps: acquiring at least one image of a target object to be detected; identifying, in the image, a sub-object in the target object, where the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; and subtracting the first weight from the target weight of the target object to obtain a second weight. The invention solves the technical problem that the weight of sub-objects in an object cannot be deducted.

Description

Object detection method, storage medium and system
Technical Field
The present invention relates to the field of computers, and in particular, to a method, a storage medium, and a system for detecting an object.
Background
At present, when an object is detected and sub-objects are deducted from it, for example in the scrap grading step at large iron and steel plants, the work is usually done manually: workers observe several layers of the scrap surface as the vehicle is unloaded and estimate the approximate proportion of each grade in order to determine the weight and impurity deductions. This approach is strongly affected by sampling density, blind angles, and subjective intent; different employees have cognitive differences, so their judgments differ. Even the same employee's judgment deviates and fluctuates with factors such as mood and fatigue. In addition, the relationships between the people involved also have a direct influence on the manual judgment result.
Therefore, the related-art approach to detecting sub-objects in an object is affected by many factors, such as mood, fatigue, cognitive differences, and interpersonal relationships, so the manual process is highly unstable, and there is the technical problem that the weight of sub-objects in an object cannot be deducted.
For this technical problem, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present invention provide an object detection method, a storage medium, and a system, to at least solve the technical problem that the weight of a sub-object in an object cannot be deducted.
According to one aspect of the embodiments of the present invention, a method of detecting an object is provided. The method may include: acquiring at least one image of a target object to be detected; identifying, in the image, a sub-object in the target object, where the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; and subtracting the first weight from the target weight of the target object to obtain a second weight.
According to another aspect of the embodiments of the present invention, another object detection method is provided. The method may include: acquiring at least one vehicle image of a vehicle object to be detected; identifying a sub-object in the vehicle image, where the sub-object is an object to be deducted from the vehicle object; determining a first weight of the sub-object; and subtracting the first weight from the target weight of the vehicle object to obtain a second weight.
According to another aspect of the embodiments of the present invention, yet another object detection method is provided. The method may include: in response to an image input instruction acting on an operation interface, acquiring at least one image of a target object to be detected; and in response to an object deduction instruction acting on the operation interface, displaying a second weight of the target object on the operation interface, where the second weight is obtained by deducting the first weight of a sub-object in the target object from the target weight of the target object, and the sub-object is an object to be deducted from the target object.
According to another aspect of the embodiments of the present invention, an apparatus for detecting an object is also provided. The apparatus may include: a first acquisition unit, configured to acquire at least one image of a target object to be detected; a first identification unit, configured to identify, in the image, a sub-object in the target object, where the sub-object is an object to be deducted from the target object; a first determination unit, configured to determine a first weight of the sub-object; and a first deduction unit, configured to subtract the first weight from the target weight of the target object to obtain a second weight.
According to another aspect of the embodiments of the present invention, another object detection apparatus is also provided. The apparatus may include: a second acquisition unit, configured to acquire at least one vehicle image of a vehicle object to be detected; a second identification unit, configured to identify a sub-object in the vehicle image, where the sub-object is an object to be deducted from the vehicle object; a second determination unit, configured to determine a first weight of the sub-object; and a second deduction unit, configured to subtract the first weight from the target weight of the vehicle object to obtain a second weight.
According to another aspect of the embodiments of the present invention, yet another object detection apparatus is also provided. The apparatus may include: a third acquisition unit, configured to acquire, in response to an image input instruction acting on an operation interface, at least one image of a target object to be detected; and a display unit, configured to display, in response to an object deduction instruction acting on the operation interface, a second weight of the target object on the operation interface, where the second weight is obtained by deducting the first weight of a sub-object in the target object from the target weight of the target object, and the sub-object is an object to be deducted from the target object.
According to another aspect of the embodiments of the present invention, still another object detection apparatus is also provided. The apparatus may include: a fourth acquisition unit, configured to acquire at least one image of a target object to be detected by calling a first interface, where the first interface includes a first parameter whose parameter value is the image; a third identification unit, configured to identify, in the image, a sub-object in the target object, where the sub-object is an object to be deducted from the target object; a third determination unit, configured to determine a first weight of the sub-object; a third deduction unit, configured to subtract the first weight from the target weight of the target object to obtain a second weight; and an output unit, configured to output the second weight by calling a second interface, where the second interface includes a second parameter whose parameter value is the second weight.
In the embodiments of the present invention, at least one image of a target object to be detected is acquired; a sub-object in the target object is identified in the image, where the sub-object is an object to be deducted from the target object; a first weight of the sub-object is determined; and the first weight is subtracted from the target weight of the target object to obtain a second weight. That is, the present application analyzes images of the target object, identifies its sub-objects, and subtracts their weight from the target weight of the target object to obtain the final second weight, thereby achieving the purpose of deducting weight from the target object, solving the technical problem that the weight of sub-objects in an object cannot be deducted, and achieving the technical effect of deducting the weight of sub-objects in an object.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal (or mobile device) of a method for detecting an object according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of detecting an object according to an embodiment of the invention;
FIG. 3 is a flow chart of another method of object detection according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method of object detection according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method of detecting an object according to the present embodiment;
FIG. 6 is a schematic diagram of an integrated scrap steel grading, weight deduction and impurity deduction system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of single-layer picture processing according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of logistic regression according to an embodiment of the present invention;
FIG. 9 is a block diagram of the Yolact algorithm according to an embodiment of the invention;
FIG. 10 is a schematic diagram of the structure of a random forest according to an embodiment of the invention;
FIG. 11 is a schematic view of an apparatus for detecting an object according to an embodiment of the present invention;
FIG. 12 is a schematic view of another object detection apparatus according to an embodiment of the present invention;
FIG. 13 is a schematic view of another object detection arrangement according to an embodiment of the present invention;
FIG. 14 is a schematic view of another object detection apparatus according to an embodiment of the present invention;
fig. 15 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Weighbridge: also known as a truck scale, a large scale installed at ground level, usually used to weigh trucks by the ton; it is the main weighing device used by factories, mines, and merchants to measure bulk goods.
Weighing: driving the truck onto the weighbridge to weigh it.
Weight deduction: scrap cargo whose grade does not reach the declared standard, or rejected goods, must be identified and deducted from the cargo weight.
Impurity deduction: non-steel materials such as cement, sand, and stone present in the scrap cargo must be identified and deducted from the cargo weight.
Grading: scrap steel is divided into grades according to thickness; the system identifies the grade of the scrap and checks whether it is consistent with the grade reported by the supplier.
Foreign matter: rejected articles present in the scrap cargo that affect steelmaking and must be identified.
Single-layer grading: grading the scrap cargo on the topmost layer of the vehicle hopper.
Whole-vehicle grading: comprehensively grading the whole vehicle's scrap cargo according to the grade of the scrap on each layer of the hopper.
Gross weight: the first weighbridge weighing, i.e., the vehicle plus the total weight of the scrap.
Remaining weight: the gross weight minus the weight deduction and the impurity deduction.
Scrap net weight: after the second weighbridge weighing of the empty vehicle, the weight of the scrap with the vehicle's own weight removed, i.e., the remaining weight minus the vehicle weight.
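The weighing relationships defined above reduce to simple arithmetic; the sketch below is illustrative only (the function and variable names are ours, not the patent's):

```python
def scrap_net_weight(gross, kouzhong, kouza, tare):
    """Net scrap weight from the two weighbridge readings and the deductions.

    gross    -- first weighing: vehicle plus scrap (tonnes)
    kouzhong -- weight deduction (substandard or rejected scrap)
    kouza    -- impurity deduction (non-steel material)
    tare     -- second weighing: the empty vehicle
    """
    remaining = gross - kouzhong - kouza  # "remaining weight"
    return remaining - tare               # scrap net weight

# e.g. 35 t gross, 1 t weight deduction, 2 t impurities, 12 t empty vehicle
# leaves 20 t of accepted scrap
```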
Example 1
According to an embodiment of the present invention, an embodiment of a method for detecting an object is also provided. It should be noted that the steps illustrated in the flowcharts of the accompanying drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one here.
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 is a block diagram of a hardware configuration of a computer terminal (or mobile device) for a method of detecting an object according to an embodiment of the present invention. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single stand-alone processing module, or be incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the data processing circuitry acts as a kind of processor control (for example, selection of a variable-resistance termination path connected to an interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the object detection method in the embodiments of the present invention; the processor 102 executes various functional applications and data processing, that is, implements the object detection method described above, by running the software programs and modules stored in the memory 104. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a particular specific example and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
In the operating environment shown in fig. 1, the present application provides a method of detecting an object as shown in fig. 2. It should be noted that the object detection method of the embodiment may be executed by the mobile terminal of the embodiment shown in fig. 1.
Fig. 2 is a flowchart of a method for detecting an object according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
step S202, at least one image of the target object to be detected is acquired.
In the technical solution provided by step S202 of the present invention, the target object may be an object whose detection is a core functional index in manufacturing, for example a vehicle (a scrap truck) loaded with scrap steel. At least one image of the target object is acquired. Optionally, in this embodiment, a plurality of image capturing devices may be installed around the target object, and the target object may be photographed globally and locally from a plurality of different angles and at different magnifications to obtain the at least one image. For example, when the target object is a scrap truck, it may be photographed by a plurality of cameras to obtain a series of pictures of the truck, the hopper area, the cargo area, and so on; no specific limitation is made here.
Optionally, the image of this embodiment may be an image of each sampling layer of the target object during the unloading process, so that at least one image may also be referred to as at least one layer of image.
Optionally, after at least one image of the target object to be detected is acquired, the image may be preprocessed, where the preprocessing includes, but is not limited to, performing image equalization processing on the image, performing contrast stretching on the image, performing edge extraction on the image, performing matching rectification on the image, performing key region clipping on the image, and the like, and here, no specific limitation is made, so that the image meets the requirement of effectively identifying the sub-object of the target object.
Optionally, this embodiment may acquire the at least one image using a mobile tracking drone, dynamically adjusted fixed cameras, or by dynamically adjusting the imaging environment, such as the ambient light and supplementary lighting.
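The equalization and contrast-stretching preprocessing mentioned above could, for instance, be implemented as follows. This is a minimal NumPy sketch; the patent does not prescribe any particular implementation, and both function names are ours:

```python
import numpy as np

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit grayscale image (image equalization step)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    # map the cumulative distribution onto the full 0..255 range
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    return cdf.astype(np.uint8)[img]

def contrast_stretch(img: np.ndarray, lo: float = 2, hi: float = 98) -> np.ndarray:
    """Linearly stretch intensities between the lo/hi percentiles."""
    p_lo, p_hi = np.percentile(img, (lo, hi))
    out = (img.astype(np.float64) - p_lo) * 255.0 / max(p_hi - p_lo, 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Edge extraction, rectification, and key-region cropping would follow the same per-image pattern.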
Step S204, identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object.
In the technical solution provided in step S204 of the present invention, after at least one image of the target object to be detected is obtained, a sub-object in the target object is identified in the image, where the sub-object is an object that needs to be eliminated in the target object.
In this embodiment, the image may be detected and identified, and sub-objects of the target object recognized in it. The sub-objects may include a first sub-object made of the target material but failing to reach a target grade (for example, a scrap grade): scrap cargo below the declared grade, or rejected goods, which must be identified in the target object and deducted, that is, the weight deduction. Optionally, the sub-objects may further include a second sub-object made of a material other than the target material: for example, where the target material is scrap steel, non-steel materials such as cement and gravel, or other foreign matter that affects steelmaking, which must likewise be identified and deducted from the target object, that is, the impurity deduction. Weight deduction and impurity deduction are among the core indexes of target-object detection; the target grade may be used to grade the target object, and grading may include single-layer grading and whole-vehicle grading.
Alternatively, the embodiment may identify each image from which sub-objects of the target object are identified. The sub-objects corresponding to each image are taken together as the sub-objects in the target object.
In step S206, a first weight of the sub-object is determined.
In the technical solution provided by step S206 of the present invention, after the sub-object in the target object is identified in the image, the first weight of the sub-object is determined.
In this embodiment, the sub-object is an object that needs to be subtracted from the target object. Determining the first weight of the sub-object may consist of inputting a relevant parameter of the sub-object as an independent variable into a target model and processing it through the model to obtain the first weight, where the first weight is the dependent variable of the target model, and the relevant parameter may be the ratio of the sub-object's area to the area of the target object.
Step S208, deducting the first weight from the target weight of the target object to obtain a second weight.
In the technical solution provided in step S208 of the present invention, after determining the first weight of the sub-object, the first weight may be subtracted from the target weight of the target object to obtain the second weight.
In this embodiment, the weight of the target object is the target weight and the weight of the sub-object is the first weight, which needs to be removed from the target object; that is, the first weight is subtracted from the target weight to obtain the second weight. This realizes weight and impurity deduction for the whole target object (for example, a whole vehicle), keeps the overall detection scheme reasonable and interpretable, and avoids the situation in the related art in which the sub-objects of a target object cannot be detected and identified and their deductions cannot be calculated.
Through steps S202 to S208 described above, at least one image of the target object to be detected is acquired; a sub-object in the target object is identified in the image, where the sub-object is an object to be deducted from the target object; a first weight of the sub-object is determined; and the first weight is subtracted from the target weight of the target object to obtain a second weight. That is, the present application analyzes images of the target object, identifies its sub-objects, and subtracts their weight from the target weight of the target object to obtain the final second weight, thereby achieving the purpose of deducting weight from the target object, solving the technical problem that the weight of sub-objects in an object cannot be deducted, and achieving the technical effect of deducting the weight of sub-objects in an object.
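Steps S202 to S208 can be sketched end to end as follows; the callables and their signatures are hypothetical stand-ins for the recognition and weight-estimation models (they are not defined by the patent):

```python
def detect_object_weight(images, target_weight, identify, estimate_weight):
    """S202-S208: from images of the target object, identify its sub-objects,
    estimate their (first) weight, and deduct it from the target weight."""
    # S204: identify sub-objects in every image and pool them together
    sub_objects = [obj for img in images for obj in identify(img)]
    first_weight = estimate_weight(sub_objects)   # S206
    return target_weight - first_weight           # S208: the second weight

# usage with stub callables (illustrative numbers only)
second_weight = detect_object_weight(
    images=["layer1.jpg", "layer2.jpg"],          # S202: one image per layer
    target_weight=33.0,
    identify=lambda img: [{"kind": "impurity", "area_ratio": 0.05}],
    estimate_weight=lambda subs: 10.0 * sum(s["area_ratio"] for s in subs),
)
```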
The above-described method of this embodiment is further described below.
As an alternative implementation, step S206 of determining the first weight of the sub-object includes: acquiring a first ratio of the area of the sub-object to the area of the target object; and determining the first weight based on the first ratio.
In this embodiment, to determine the first weight of the sub-object, a first ratio of the area of the sub-object in each image to the area of the target object may be acquired for each image. Optionally, the sub-objects in each image include a first sub-object, made of the target material but below the target grade, and a second sub-object, made of a material other than the target material. For the first sub-object, a first ratio of its area in each image to the area of the target object may be acquired, for example the area ratio of substandard scrap; for the second sub-object, a first ratio of its area in each image to the area of the target object may be acquired, for example the impurity area ratio.
For a plurality of images, the first ratio of the entire target object may be obtained from the plurality of per-image first ratios. For example, the substandard scrap area ratio of the entire target object is obtained from the per-image substandard scrap area ratios and may be denoted kouzhong_ratio; similarly, the impurity ratio of the entire target object may be obtained from the per-image impurity area ratios and denoted kouza_ratio.
After obtaining the first ratio of the whole target object, this embodiment may determine the first weight of the sub-objects in the whole target object based on that ratio; optionally, the first weight may be calculated from the substandard scrap area ratio and the impurity ratio.
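The per-image and whole-object ratios could be computed from binary segmentation masks roughly as follows. The names kouzhong_ratio and kouza_ratio follow the text; pooling the areas over all layer images is our assumption about how the per-image ratios combine:

```python
import numpy as np

def whole_object_ratio(sub_masks, object_masks):
    """First ratio over all layer images: total sub-object area divided by
    the total target-object area (e.g. kouzhong_ratio or kouza_ratio)."""
    sub_area = sum(int(m.sum()) for m in sub_masks)
    obj_area = sum(int(m.sum()) for m in object_masks)
    return sub_area / max(obj_area, 1)  # guard against an empty mask set

# two 2x2 layers: layer 1 has one substandard pixel out of four, layer 2 none
kouzhong_ratio = whole_object_ratio(
    sub_masks=[np.array([[1, 0], [0, 0]]), np.zeros((2, 2), dtype=int)],
    object_masks=[np.ones((2, 2), dtype=int), np.ones((2, 2), dtype=int)],
)
# one sub-object pixel out of eight object pixels
```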
As an alternative embodiment, determining the first weight based on the first ratio includes: determining the first weight based on the first ratio and at least one of the following items of target information: a mixing coefficient of the target material, the net weight of the target object, and a second ratio of target-type objects in the target object, where the mixing coefficient indicates the degree to which the target material in the target object is mixed, and the net weight is the difference between the two weighing results of the target object.
In this embodiment, the mixing coefficient of the target material of the whole target object (for example, scrap steel) may be estimated from the distribution of the different grades across the whole target object; it indicates the degree to which the target material is mixed, takes a value between 0 and 1, and may be denoted mix_ratio. This embodiment may further obtain the net weight of the target object, denoted netweight, which is the difference between its two weighings: for example, when the target object is a vehicle, it is weighed twice, and the weight of the target material with the vehicle's own weight removed is the remaining weight minus the vehicle's own weight, where the remaining weight is the gross weight minus the first weight of the sub-objects, and the gross weight is the first weighing of the vehicle together with the total weight of the target material. This embodiment may further identify target-type objects in the target object, for example specially penalized items such as over-long devices and closed containers, whose second ratio may be denoted ex_kouzhong_ratio. The embodiment may then determine the first weight of the sub-objects based on the first ratios and the at least one item of target information, for example based on the substandard scrap area ratio of the entire target object, the impurity ratio of the entire target object, and the at least one item of target information.
As an alternative embodiment, determining the first weight based on the first proportion and the target information includes: performing regression processing on the first proportion and the target information to obtain the first weight.
In this embodiment, when the first weight is determined based on the first proportion and the target information, regression processing may be performed on them: for example, regression may be run over the substandard-scrap area ratio of the whole target object, the impurity ratio of the whole target object, and at least one of the mixing coefficient of the target material, the net weight of the target object, and the second proportion of target-type objects in the target object, so as to obtain the first weight. Alternatively, the embodiment may predict with a regression algorithm model, taking two or more of these single factors, or combinations of them, as the input independent variables of the regression algorithm model, thereby obtaining the first weight of the sub-objects of the whole target object.
Optionally, for the regression algorithm model, a machine learning regression algorithm may be selected, including but not limited to logistic regression, Random Forest, Gradient Boosting Decision Tree (GBDT), XGBoost, and LightGBM (a gradient-boosting framework based on decision trees); a deep neural network regression model may also be selected. A random forest is a classifier that trains on and predicts samples using multiple decision trees. Alternatively, the embodiment may obtain the regression algorithm model by training on data samples; for example, when the target object is a vehicle, the model may be obtained by regression learning on historical unloading-penalty records.
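As an illustrative sketch only, and not the patent's disclosed implementation, the regression step can be outlined in Python. The feature names (kouzhong_ratio, kouza_ratio, mix_ratio, netweight, ex_kouzhong_ratio) follow the ratios named in this description, the training data is synthetic, and a plain least-squares fit stands in for the models named above (random forest, GBDT, XGBoost, LightGBM), which a production system would fit on historical unloading-penalty records.

```python
import numpy as np

def fit_deduction_model(features, deducted_weights):
    """Least-squares linear regression: deduction ~ X.w + b (stand-in for GBDT etc.)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, deducted_weights, rcond=None)
    return w

def predict_deduction(w, feature_row):
    """Predict the deduction weight for one vehicle from its feature row."""
    return float(np.append(feature_row, 1.0) @ w)

# Synthetic "historical penalty records": columns are
# [kouzhong_ratio, kouza_ratio, mix_ratio, netweight (tons), ex_kouzhong_ratio]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 5))
X[:, 3] *= 30.0                                 # net weight up to 30 tons
true_w = np.array([5.0, 8.0, 2.0, 0.05, 3.0])   # assumed ground-truth effect sizes
y = X @ true_w + 0.1                            # deducted weight with a fixed offset
w = fit_deduction_model(X, y)
pred = predict_deduction(w, np.array([0.1, 0.05, 0.3, 25.0, 0.0]))
```

Because the synthetic data is exactly linear, the fit recovers the generating coefficients; with real penalty records a tree-based regressor would replace the least-squares step unchanged in interface.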
As an alternative embodiment, when the at least one image comprises a plurality of images, determining the first weight based on the first proportion includes: performing a weighted summation of the plurality of first proportions corresponding to the plurality of images to obtain a summation result; and determining the first weight based on the summation result.
In this implementation, for the plurality of images, the plurality of corresponding first proportions may be weighted and summed; the summation result is the first proportion of the whole target object. For example, the substandard-scrap area ratios of the individual layers may be weighted and summed to give the substandard-scrap area ratio of the whole target object, and likewise the impurity area ratios may be weighted and summed to give the impurity ratio of the whole target object. After the summation result is obtained, the first weight may be determined from it, for example by performing regression over the substandard-scrap area ratio of the whole target object, the impurity ratio of the whole target object, and at least one of the mixing coefficient of the target material, the net weight of the target object, and the second proportion of target-type objects in the target object.
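The weighted summation of per-layer ratios can be sketched as follows; this is an illustrative helper, and the equal-weight default is an assumption, since the text leaves the per-layer weighting scheme open.

```python
def fuse_layer_ratios(layer_ratios, layer_weights=None):
    """Weighted sum of per-layer first proportions into a whole-vehicle proportion.

    layer_ratios: one ratio per captured layer image, e.g. the
    substandard-scrap area ratio of each layer.
    layer_weights: optional per-layer weights; equal weights assumed if omitted.
    """
    if layer_weights is None:
        layer_weights = [1.0 / len(layer_ratios)] * len(layer_ratios)
    return sum(r * w for r, w in zip(layer_ratios, layer_weights))
```

The same helper serves both the substandard-scrap ratios and the impurity ratios, called once per quantity.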
As an optional implementation, the method further comprises: identifying a first area where the sub-object is located in the image; performing segmentation processing on the first area to obtain a segmentation result; and determining the area of the sub-object based on the segmentation result.
In this embodiment, the sub-object may be detected and segmented. Optionally, the embodiment may identify the first region in which the sub-object is located in the image, detect the various second sub-objects (materials other than the target material) in the image of the target object, and perform segmentation processing on the first region in which each second sub-object is located to obtain a segmentation result; based on the segmentation result, the type and area of each second sub-object are identified, thereby implementing impurity detection and segmentation.
As an alternative embodiment, when the at least one image comprises a plurality of images, step S206 of determining the first weight of the sub-object includes: performing deduplication processing on the plurality of sub-objects corresponding to the plurality of images to obtain a deduplication result; and determining the first weight based on the deduplication result.
In this embodiment, some sub-objects corresponding to different images may be identical, so the same sub-object may be sampled multiple times; for example, the same first sub-object or the same second sub-object may appear in several layers. The embodiment may therefore deduplicate the repeated sub-objects, for example by taking the intersection, the union, or the mean, to obtain a deduplication result. Based on the deduplication result, the first proportion of the whole target object can be calculated, such as the substandard-scrap ratio and the impurity ratio of the whole target object; regression processing can then be performed over the substandard-scrap area ratio of the whole target object, the impurity ratio of the whole target object, and at least one of the mixing coefficient of the target material, the net weight of the target object, and the second proportion of target-type objects in the target object, thereby obtaining the first weight.
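A minimal sketch of the deduplication step, under the assumption that each detection is represented as a set of pixel coordinates; only the "mean" mode named in the text is shown, where repeated samplings of one physical object are grouped by intersection-over-union and their areas averaged.

```python
def dedup_mean_area(instance_masks, overlap_thresh=0.5):
    """Deduplicate sub-objects sampled in several layers (mean mode).

    instance_masks: list of sets of (row, col) pixel coordinates, one per
    detection across all layers. Two detections are treated as the same
    physical sub-object when |intersection| / |union| > overlap_thresh.
    Returns one mean area per deduplicated sub-object.
    """
    groups = []  # each entry: [representative_mask, list_of_member_areas]
    for mask in instance_masks:
        for group in groups:
            rep = group[0]
            if len(mask & rep) / len(mask | rep) > overlap_thresh:
                group[1].append(len(mask))
                break
        else:  # no overlapping group found: start a new one
            groups.append([mask, [len(mask)]])
    return [sum(areas) / len(areas) for _, areas in groups]
```

The intersection and union modes mentioned in the text would instead take min or max over each group; the grouping logic stays the same.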
As an alternative example, the first weight in this embodiment may be estimated by computing the net weight and purity of the whole target object from the calculated grade distribution together with the owner's overall penalty rules; alternatively, the most similar object in a database of historical images may be matched by image comparison, and the first weight estimated by reference to it. For example, for the deduction estimate, the deduction may be computed from the grade distribution, the net weight of the whole vehicle, and the owner's deduction-penalty rules, or the most similar historical record may be found by image comparison and the deduction weight estimated from the penalty result of that historical record.
As an optional implementation, the method further comprises: extracting a second area from the image, wherein the second area is an area covered by target materials in the target object; performing segmentation processing on the second area according to the grade of the target material to obtain a plurality of sub-areas; and determining a target grade based on the grade corresponding to each sub-area.
In this embodiment, a second region, which is the region covered by the target material in the target object, for example the scrap region of the car hopper, may be extracted from each image of the target object, excluding interference from regions outside it; this provides the basis for detecting the second sub-objects and determining the grade of the target object.
Optionally, in this embodiment, the second region of each image may be segmented according to the different grades of the target material; for example, each layer image may be segmented into the different grades of scrap steel and the sub-region at each grade calculated. The target grade of the whole target object may then be determined from the grade corresponding to each sub-region, the target grade being one of the core indexes of the target object. This enables detection and identification of the different grades of scrap steel across the whole vehicle load.
Optionally, the embodiment may extract the second region from the image by deep learning, for example using a deep-learning image instance segmentation algorithm (such as Mask R-CNN, SSD, or Yolact) or a classification deep neural network such as ResNet-50 or DenseNet, then sequentially perform segmentation processing on the first region to determine the area of the sub-object (for example, impurity detection and segmentation), and determine the target grade of the whole target object, for example the scrap grade of the whole vehicle.
Optionally, the Yolact algorithm adds a mask branch to an existing one-stage detection network and divides the whole task into two parallel subtasks. A prototype mask branch may use the network structure of a Fully Convolutional Network (FCN) to generate the prototype masks; this process does not involve individual instances (an individual instance is obtained only after cropping the detection result). A detection branch predicts the mask coefficients for each anchor so as to obtain the positions of the instances in the image; the mask coefficients are then combined linearly with the prototype masks, merging the results of the two branches to obtain the second region.
The Yolact algorithm in this embodiment may have the following advantages: it is fast; the segmentation masks are of high quality, since the pooling operation of two-stage methods is not used, lossless feature information can be obtained, and performance is better when segmenting large targets; and it generalizes as a module, since prototype generation and mask-coefficient prediction can be added to an existing detection network, so the applicable scenes are wider.
As an optional implementation, determining the target level based on the level corresponding to each sub-region includes: a target level is determined based on the area of each sub-region and the corresponding level.
For each image, the embodiment may acquire the area of each sub-region, for example the area of the sub-region corresponding to each scrap grade, and then estimate the target grade of the whole target object based on the areas of the sub-regions and the corresponding grades.
As an optional embodiment, when the at least one image comprises a plurality of images, determining the target grade based on the area of each sub-region and the corresponding grade includes: determining the sub-grade corresponding to each image based on the area and corresponding grade of each sub-region in that image, to obtain a plurality of sub-grades; and performing fusion processing on the plurality of sub-grades and their corresponding weights to obtain the target grade.
In this embodiment, for each image, the sub-grade may be estimated from the areas of the sub-regions and their corresponding grades, for example by voting weighted by sub-region area; the sub-grade may be the scrap grade of each layer, implementing single-layer scrap grade determination. Optionally, the weight corresponding to the sub-grade of each image is obtained, and the sub-grades and their weights are fused to obtain the target grade of the whole target object, for example the scrap grade of the whole vehicle.
Optionally, the embodiment may perform fusion with a surface-to-bottom linear-attenuation weighting. Suppose each layer contains the four sub-grades A, B, C, D with ratios RA, RB, RC, and RD respectively; the weight of layer i among layers 1 to n may be computed as Ri = (n - i)/n, and the target grade of the whole target object is then Max(sum(RA × Ri), sum(RB × Ri), sum(RC × Ri), sum(RD × Ri)), where i runs from 1 to n, and n may be 10 to 30 depending on the actual situation.
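The fusion formula can be sketched directly; note this reads the layer weight as Ri = (n - i)/n, which is a reconstruction of the garbled original and gives the surface layers (photographed first during unloading) the largest weight.

```python
def whole_vehicle_grade(layer_class_ratios, grades=("A", "B", "C", "D")):
    """Surface-to-bottom linear-attenuation fusion of per-layer grades.

    layer_class_ratios: list over layers 1..n; each entry holds the area
    ratios (RA, RB, RC, RD) of the four grade classes in that layer.
    Layer i gets weight Ri = (n - i)/n, so under this formula the
    bottom layer (i = n) receives weight 0.
    The whole-vehicle grade is the class with the largest weighted sum.
    """
    n = len(layer_class_ratios)
    totals = [0.0] * len(grades)
    for i, ratios in enumerate(layer_class_ratios, start=1):
        ri = (n - i) / n
        for c, r in enumerate(ratios):
            totals[c] += r * ri
    return grades[max(range(len(grades)), key=lambda c: totals[c])]
```

For example, with two layers where the surface layer is mostly grade A, the weighted sums favor A even if the lower layer is mostly grade B.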
As an alternative example, the target grade of the target object may also be obtained by image search: a basic grade database is constructed, the image of the target object is searched against it, the grade of the image most similar to the image of the target object is retrieved, and that grade is determined as the target grade.
The embodiment of the invention also provides another object detection method.
Fig. 3 is a flowchart of another object detection method according to an embodiment of the present invention. As shown in fig. 3, the method may include:
step S302, at least one vehicle image of a vehicle object to be detected is acquired.
In the technical solution provided by step S302 of the present invention, the vehicle object may be a vehicle to be detected for core manufacturing-industry indexes, loaded with scrap steel, such as a scrap steel truck. At least one vehicle image of the vehicle object is acquired. Optionally, the embodiment may install a plurality of image-capture devices around the vehicle object and photograph it globally and locally from a plurality of different angles and at different magnifications to obtain the at least one vehicle image. For example, when the vehicle object is a truck, it can be photographed by a plurality of cameras to obtain a series of pictures of the scrap car, the car-hopper area, the cargo area, and so on; no specific limitation is made here.
Optionally, after the at least one vehicle image of the vehicle object to be detected is acquired, the vehicle image may be preprocessed. Preprocessing includes, but is not limited to, image equalization, contrast stretching, edge extraction, matching rectification, and key-region cropping; no specific limitation is made here. Preprocessing makes the vehicle image meet the requirements for effectively identifying the sub-objects of the vehicle object.
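Three of the named preprocessing steps can be sketched in plain NumPy; this is an illustrative outline only (a real pipeline would more likely use an image library such as OpenCV), and the percentile bounds are assumptions.

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linear contrast stretch of an 8-bit image between two percentiles."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

def equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf.astype(np.uint8)[img]  # map each pixel through the scaled CDF

def crop_roi(img, box):
    """Key-region cropping: box = (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    return img[y1:y2, x1:x2]
```

Edge extraction and matching rectification, also named in the text, would typically use gradient filters and homography estimation respectively and are omitted here.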
In step S304, a sub-object in the vehicle object is identified in the vehicle image, where the sub-object is an object to be deducted from the vehicle object.
In the technical solution provided in step S304 of the present invention, the vehicle image may be detected and recognized, and the sub-objects of the vehicle object identified from it. A sub-object may include a first sub-object, for example substandard scrap cargo or rejected goods, whose weight must be identified and deducted from the target object, that is, the weight deduction; a sub-object may further include a second sub-object, for example non-steel material such as cement and gravel, which must be identified and deducted from the target object, that is, the impurity deduction. Weight deduction and impurity deduction are among the core indexes for detecting the target object.
Alternatively, the embodiment may recognize each vehicle image and identify the sub-objects of the vehicle object from it; the sub-objects corresponding to all the vehicle images are taken together as the sub-objects of the vehicle object.
In step S306, a first weight of the sub-object is determined.
In the technical solution provided by step S306 of the present invention, the sub-object is the object whose weight needs to be deducted from the vehicle object, and this embodiment may determine its first weight: the relevant parameters of the sub-object are input to a target model as independent variables and processed by the target model to obtain the first weight, which is the dependent variable of the target model. A relevant parameter of the sub-object may be, for example, the ratio of the sub-object's area to the area of the vehicle object.
Step S308, deduct the first weight from the target weight of the vehicle object to obtain a second weight.
In the technical solution provided in step S308 of the present invention, the weight of the vehicle object is the target weight and the weight of the sub-object is the first weight, which needs to be deducted: subtracting the first weight from the target weight yields the second weight. This realizes weight deduction for the whole vehicle object, ensures the rationality and interpretability of the whole detection scheme, and avoids the situation in the related art where the sub-objects of the vehicle object cannot be detected and identified and the deduction for the vehicle object cannot be calculated.
The method can be applied to the scrap-grading link. Based on deep-learning image detection and segmentation algorithms, scrap of different grades and impurities can be detected and segmented for each layer of scrap during unloading, yielding the proportion of steel at each grade and the type and content of impurities across the whole vehicle load. Combined with information such as industry penalty rules and the net weight of the scrap, the penalty and impurity weights are then calculated through big-data statistics and regression, ensuring the rationality and interpretability of the whole estimation scheme.
The embodiment of the invention also provides another object detection method from the man-machine interaction side.
Fig. 4 is a flowchart of another object detection method according to an embodiment of the present invention. As shown in fig. 4, the method may include:
step S402, responding to an image input instruction acting on the operation interface, and acquiring at least one image of the target object to be detected.
In the technical solution provided by step S402 of the present invention, a user may trigger an image input instruction on the operation interface; the instruction is used to obtain at least one image of the target object to be detected, and the target object may be an object to be detected for core manufacturing-industry indexes, for example a vehicle loaded with scrap steel. At least one image of the target object is acquired. Optionally, in this embodiment, a plurality of image-capture devices may be installed around the target object, and the target object photographed globally and locally from a plurality of different angles and at different magnifications to obtain the at least one image.
Step S404, responding to the object deduction instruction acting on the operation interface, displaying a second weight of the target object on the operation interface, where the second weight is a weight obtained by deducting the first weight of a sub-object in the target object from the target weight of the target object, and the sub-object is an object to be deducted from the target object.
In the technical solution provided in step S404 of the present invention, the second weight is a weight obtained by subtracting the first weight of the sub-object in the target object from the target weight of the target object, and the sub-object is an object to be subtracted from the target object.
In this embodiment, the user may trigger an object deduction instruction on the graphical user interface; the instruction causes the second weight of the target object to be displayed on the operation interface. Optionally, in response to the object deduction instruction, the image may be detected and recognized, and the sub-objects of the target object identified from it. A sub-object may include a first sub-object, for example substandard scrap cargo or rejected goods, whose weight must be identified and deducted from the target object, that is, the weight deduction; optionally, a sub-object may also include a second sub-object, such as non-steel material like cement and sand, which must be identified and deducted from the target object, that is, the impurity deduction. Since the sub-object is the object to be subtracted from the target object, this embodiment determines the first weight of the sub-object, which may be obtained by inputting the relevant parameters of the sub-object into the target model as independent variables and processing them with the target model.
This embodiment can deduct the first weight from the target weight to obtain the second weight, realizing weight deduction and impurity deduction for the whole target object, ensuring the rationality and interpretability of the whole detection scheme, and avoiding the problems in the related art that the sub-objects of the target object cannot be detected and identified and the deductions for the target object cannot be calculated.
The embodiment of the invention also provides another object detection method from the server side.
Fig. 5 is a flowchart of a detection method of an object according to the present embodiment. As shown in fig. 5, the method may include the steps of:
step S502, at least one image of a target object to be detected is obtained by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the image.
In the technical solution provided by step S502 of the present invention, the first interface may be an interface for performing data interaction between the server and the client. The client can transmit at least one image to the first interface as a first parameter of the first interface, so as to achieve the purpose of uploading at least one image to the server.
Step S504, a sub-object in the target object is identified in the image, wherein the sub-object is an object to be deducted from the target object.
In step S506, a first weight of the sub-object is determined.
Step S508, subtracting the first weight from the target weight of the target object to obtain a second weight.
Step S510, outputting a second weight by calling a second interface, where the second interface includes a second parameter, and a parameter value of the second parameter is the second weight.
In the technical solution provided in step S510 of the present invention, the second interface may be an interface for performing data interaction between the server and the client, and the server may transmit the second weight to the second interface as a parameter of the second interface, so as to achieve the purpose of issuing the second weight to the client.
In this embodiment, the sub-objects of the target object are identified by identifying the image of the target object, and the weight of the sub-objects is deducted from the target weight of the target object, so that the purpose of deducting the weight of the target object is achieved, the technical problem that the weight of the sub-objects in the object cannot be deducted is solved, and the technical effect of deducting the weight of the sub-objects in the object is achieved.
Example 2
The following further describes a preferred implementation of the above method; specifically, the target object is exemplified as a vehicle loaded with scrap steel.
In the steel scrap grading link of each large steel plant, a manual operation mode is usually adopted. Different employees have cognitive differences, so that judgment results are different. The same employee is influenced by factors such as mood and fatigue state, and deviation and fluctuation can occur in judgment. The relationship between the driver and the staff who draw the scrap steel also has direct influence on the manual judgment result. Therefore, the manual operation is influenced by various factors such as mood, fatigue state, cognitive difference, interpersonal relationship and the like, and the manual operation has great instability, so that the grading cost of the scrap steel is directly influenced, and certain economic loss is caused to steel plants.
In the related technology, the scrap steel grade of the whole vehicle can be judged by using an image recognition technology, the scrap steel grade of the whole vehicle can be automatically recognized, and partial problems in the scrap steel grading link can be solved. However, the method cannot realize the detection and identification of the scrap steel of different grades of the whole vehicle scrap steel and the detection and identification of non-steel impurities, so that the calculation of the weight deduction and the impurity deduction in the grading process of the scrap steel cannot be realized, and the problem of the weight deduction and the impurity deduction in the grading process of the scrap steel still cannot be solved.
In another related technology, feature extraction can be performed on an image, and convolutional neural network training can be performed on the extracted image features to obtain a neural network model with grade division.
Both of the above schemes impose certain requirements on vehicle parking, which limits parking flexibility; when the vehicle is parked improperly, grading errors or failures easily occur. Likewise, neither can detect and identify the different grades of scrap across the whole vehicle load or the non-steel impurities, so the weight and impurity deductions in the scrap grading process cannot be calculated, even though these deductions are among the core indexes of scrap grading.
This embodiment addresses the two core problems of scrap grading: grade determination, and weight and impurity deduction.
In this embodiment, a plurality of cameras may be installed around the scrap truck to be detected, and the truck may be photographed globally and locally from a plurality of different angles and at different magnifications to obtain a series of pictures of the scrap car, the car-hopper area, and the cargo area. The pictures may be preprocessed; preprocessing may include, but is not limited to, image equalization, contrast stretching, edge extraction, matching rectification, and key-region cropping.
FIG. 6 is an overall schematic diagram of scrap grading, weight deduction, and impurity deduction according to an embodiment of the present invention. As shown in fig. 6, the truck parks and begins unloading; during unloading, images of layer 1, layer 2, ..., layer N are captured, and algorithm recognition is performed on each layer image to obtain the layer 1 result, the layer 2 result, ..., and the layer N result. Scrap grading and scrap deduction are then performed based on the results of layers 1 through N.
Fig. 7 is a schematic diagram of single-layer picture processing according to the present embodiment. As shown in fig. 7, for the image of each layer, a deep-learning image instance segmentation algorithm (such as Mask R-CNN or Yolact) is used to sequentially perform scrap-region ROI extraction, scrap grade determination, substandard-scrap detection, scrap impurity detection, and weight and impurity deduction estimation.
The above-described method of this embodiment is further described below.
In this embodiment, scrap-region ROI extraction extracts the target scrap region on the truck bed and excludes interference outside it, providing the basis for the subsequent impurity detection-segmentation and grade determination algorithms.
In this embodiment, scrap grade determination may divide the car-hopper scrap region into scrap of different grades, calculate the area of each grade, and estimate the scrap grade of the layer by area-weighted voting. For the grade determination of the whole vehicle load, fusion uses surface-to-bottom linear-attenuation weighting: if each layer contains the four scrap grades A, B, C, D, with proportions RA, RB, RC, and RD respectively, the weight of layer i among layers 1 to n is computed as Ri = (n - i)/n, and the whole-vehicle grade is then:
Max(sum(RA × Ri), sum(RB × Ri), sum(RC × Ri), sum(RD × Ri)), where i runs from 1 to n, and n may be 10 to 30 depending on the actual situation.
In this embodiment, scrap impurity detection-segmentation detects the various impurities in the hopper image and segments the impurity regions, thereby identifying the type and area of each impurity.
In this embodiment, for the deduction estimation, the basic grade of the whole vehicle may first be determined from the judged scrap grades; the substandard-scrap area ratio of each layer below the whole-vehicle basic grade is then counted, the impurity ratio of each layer is calculated from the impurity detection-segmentation results, and finally the substandard-scrap ratios and impurity ratios of all sampled layers are weighted and summed separately. When the same substandard scrap or impurity is sampled multiple times in different layers, the embodiment may deduplicate by taking the intersection, the union, or the mean, finally obtaining the whole-vehicle substandard-scrap ratio kouzhong_ratio and impurity ratio kouza_ratio. The embodiment may then estimate the mixing degree mix_ratio (between 0 and 1) of the scrap types across the whole vehicle from the distribution of the different grades, combine it with the net weight obtained as the difference of the vehicle's two weighings, and identify the ratio ex_kouzhong_ratio of special devices subject to an extra penalty, such as over-long pieces and closed containers.
For the final estimation of the vehicle's weight and impurity deductions, the embodiment may predict with a regression model, selecting two or more single factors, or combinations of them, from the substandard-scrap ratio kouzhong_ratio, the impurity ratio kouza_ratio, the mixing degree mix_ratio, and the ratio ex_kouzhong_ratio of special devices such as closed containers, as the input independent variables of the regression algorithm model, and performing regression to obtain the weight and impurity deductions of the whole vehicle.
For the regression model, a machine learning regression algorithm may be selected, including but not limited to logistic regression, random forest, GBDT, XGBoost, and LightGBM, or a deep neural network regression model; for training the regression model, the embodiment may perform regression learning on historical unloading-penalty records.
FIG. 8 is a schematic diagram of logistic regression according to an embodiment of the present invention. As shown in fig. 8, the dependent variable output by the regression model increases monotonically with the independent variable; in the figure, the independent variable ranges from -5 to 4 and the dependent variable from 0.0 to 0.9.
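The curve sketched in Fig. 8 is the standard logistic (sigmoid) function, reproduced below as a minimal illustration; the function name is my own, not from the patent.

```python
import math

def logistic(x):
    # S-shaped curve like Fig. 8: output rises monotonically with the
    # input and is squashed into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))
```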
In this embodiment, the Yolact algorithm may be selected as the deep learning image instance segmentation algorithm; it is one of the current schemes with strong instance segmentation performance. FIG. 9 is a schematic structure diagram of the Yolact algorithm according to an embodiment of the invention. As shown in fig. 9, Non-Maximum Suppression (NMS) is performed on the detections, the mask coefficients (Mask coefficients) of the surviving detections are combined with the prototype masks, a crop is performed on the resulting masks, and the results are compared against a target threshold so as to output the results satisfying the threshold condition.
In this embodiment, the Yolact algorithm adds a mask branch to an existing one-stage detection network and divides the whole task into two parallel subtasks:
1) a prototype mask branch, which may use an FCN network structure to generate the prototype masks and does not involve any single instance (a single instance is obtained after cropping with the detection result);
2) a target detection branch, which predicts the mask coefficients of each anchor so as to obtain the position of an instance in the image; a linear combination of the prototype masks with these coefficients then merges the results of the two branches.
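The merging step of the two branches can be sketched as below: a per-pixel linear combination of the prototype masks with one instance's mask coefficients, followed by a sigmoid and a binary threshold. This is a simplified toy version (no cropping, nested lists instead of tensors); the function name is an assumption.

```python
import math

def assemble_mask(prototypes, coefficients, threshold=0.5):
    # Yolact-style mask assembly: sum coefficient-weighted prototype
    # masks, apply a sigmoid per pixel, then binarize at the threshold.
    h, w = len(prototypes[0]), len(prototypes[0][0])
    mask = [[0.0] * w for _ in range(h)]
    for proto, c in zip(prototypes, coefficients):
        for i in range(h):
            for j in range(w):
                mask[i][j] += c * proto[i][j]
    return [[1 if 1.0 / (1.0 + math.exp(-v)) > threshold else 0 for v in row]
            for row in mask]
```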
In this embodiment, the above Yolact algorithm has the following advantages:
1) the processing speed is high;
2) the segmentation masks are of high quality: because the pooling operation of two-stage methods is not used, lossless feature information can be obtained, and performance is better in large-target segmentation scenes;
3) the modules are general: the prototype generation and mask coefficient prediction provided by this embodiment can be added to existing detection networks.
In this embodiment, the model for implementing the weight deduction and impurity deduction may use a random forest, which is a classifier that trains and predicts on samples using a plurality of trees, as shown in fig. 10. Fig. 10 is a schematic diagram of the network structure of a random forest according to an embodiment of the present invention: an instance (Instance) is passed to a plurality of trees (Tree-1, Tree-2 … Tree-n), each tree predicts a class (Class-A, Class-B …), and a majority vote (Majority-Voting) over the tree predictions yields the final class (Final-Class).
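The Majority-Voting step of Fig. 10 can be sketched in a few lines; the function name is illustrative, and a real random forest would of course also include the per-tree training that precedes the vote.

```python
from collections import Counter

def majority_vote(tree_predictions):
    # Final-Class in Fig. 10: each tree casts one class vote and the
    # most common label becomes the forest's output.
    return Counter(tree_predictions).most_common(1)[0][0]
```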
In this embodiment, the image of the scrap steel truck may be obtained by using an unmanned aerial vehicle in a mobile tracking mode, a fixed camera in a dynamic adjustment mode, or by dynamically adjusting the imaging environment, such as the ambient light and supplementary lighting.
The image recognition algorithm used in this embodiment may be a CNN-based image instance segmentation algorithm such as Mask-RCNN, SSD, or YOLO, or a classification deep neural network algorithm such as ResNet50 or DenseNet. The grading of the scrap may also be implemented by an image search method: for example, a basic grading database is constructed, images similar to the image of the scrap truck to be detected are found in the database, and the grade is determined from the most similar image.
In this embodiment, for the estimation of the weight deduction and impurity deduction, the net weight and purity of the whole vehicle may be calculated from the computed proportion distribution of the different grades and the owner's deduction and penalty rules; alternatively, the most similar historical load may be matched by image comparison, and the deduction and penalty weight of the current load estimated by reference to the deduction and penalty result of the corresponding historical load.
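The historical-matching alternative can be sketched as a nearest-neighbor lookup over image feature vectors. Cosine similarity and the record layout below are my assumptions; the patent does not specify the comparison metric.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_historical(query_feature, history):
    # history: list of (load_id, feature_vector, penalty_weight).
    # Return the id and penalty of the most similar historical load.
    best = max(history, key=lambda rec: cosine_similarity(query_feature, rec[1]))
    return best[0], best[2]
```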
In this embodiment, based on a deep learning image detection and segmentation algorithm, scrap of different grades and impurities are detected and segmented layer by layer during unloading, so as to obtain the proportion of each grade of steel and the type and content of impurities in the whole vehicle. The impurity deduction weight is then calculated through big-data statistics and regression, combined with information such as the industry deduction rules and the net weight of the scrap, which ensures the rationality and interpretability of the whole estimation scheme.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 3
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 2.
Fig. 11 is a schematic diagram of an apparatus for detecting an object according to an embodiment of the present invention. As shown in fig. 11, the object detecting device 110 may include: a first obtaining unit 111, a first identifying unit 112, a first determining unit 113 and a first deducting unit 114.
A first acquisition unit 111 for acquiring at least one image of a target object to be detected.
The first identifying unit 112 is configured to identify a sub-object in the target object in the image, where the sub-object is an object to be removed from the target object.
A first determining unit 113 for determining a first weight of the sub-object.
The first deducting unit 114 is configured to deduct the first weight from the target weight of the target object to obtain a second weight.
It should be noted here that the first obtaining unit 111, the first identifying unit 112, the first determining unit 113, and the first deducting unit 114 correspond to steps S202 to S208 in embodiment 1, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 3.
Fig. 12 is a schematic view of another object detection apparatus according to an embodiment of the present invention. As shown in fig. 12, the object detecting device 120 may include: a second obtaining unit 121, a second identifying unit 122, a second determining unit 123 and a second deducting unit 124.
A second acquisition unit 121 for acquiring at least one vehicle image of the vehicle object to be detected.
The second identifying unit 122 is configured to identify a sub-object in the vehicle image, where the sub-object is an object to be deducted from the vehicle object.
A second determining unit 123 for determining a first weight of the sub-object.
And a second deducting unit 124 for deducting the first weight from the target weight of the vehicle object to obtain a second weight.
It should be noted here that the second obtaining unit 121, the second identifying unit 122, the second determining unit 123 and the second subtracting unit 124 correspond to steps S302 to S308 in embodiment 1, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 4.
Fig. 13 is a schematic view of another object detection apparatus according to an embodiment of the present invention. As shown in fig. 13, the object detecting device 130 may include: a third acquisition unit 131 and a display unit 132.
A third obtaining unit 131, configured to obtain at least one image of the target object to be detected in response to an image input instruction acting on the operation interface.
The display unit 132 is configured to display, in response to the object deduction instruction acting on the operation interface, a second weight of the target object on the operation interface, where the second weight is a weight obtained by deducting the first weight of a sub-object in the target object from the target weight of the target object, and the sub-object is an object to be deducted from the target object.
It should be noted here that the third acquiring unit 131 and the display unit 132 correspond to steps S402 to S404 in embodiment 1, and the two units are the same as the example and application scenarios realized by the corresponding steps, but are not limited to the disclosure of the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an object detection apparatus for implementing the object detection method shown in fig. 5.
Fig. 14 is a schematic view of another object detection apparatus according to an embodiment of the present invention. As shown in fig. 14, the object detection apparatus 140 may include: a fourth obtaining unit 141, a third identifying unit 142, a third determining unit 143, a third deducting unit 144 and an output unit 145.
The fourth obtaining unit 141 is configured to obtain at least one image of the target object to be detected by calling the first interface, where the first interface includes the first parameter, and a parameter value of the first parameter is the image.
The third identifying unit 142 is configured to identify a sub-object in the target object in the image, where the sub-object is an object to be removed from the target object.
A third determining unit 143 for determining the first weight of the sub-object.
The third deducting unit 144 is configured to deduct the first weight from the target weight of the target object to obtain a second weight.
And an output unit 145, configured to output the second weight by invoking a second interface, where the second interface includes a second parameter, and a parameter value of the second parameter is the second weight.
It should be noted here that the fourth obtaining unit 141, the third identifying unit 142, the third determining unit 143, the third subtracting unit 144 and the output unit 145 correspond to steps S502 to S510 in embodiment 1, and five units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in the first embodiment. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
In this embodiment, the sub-objects of the target object are identified by identifying the image of the target object, and the weight of the sub-objects is subtracted from the target weight of the target object to obtain the final second weight, so that the purpose of deducting the weight of the target object is achieved, the technical problem that the weight of the sub-objects in the object cannot be subtracted is solved, and the technical effect of subtracting the weight of the sub-objects in the object is achieved.
Example 4
Embodiments of the present invention may provide an object detection system, which may include a computer terminal. The computer terminal may be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps of the object detection method: acquiring at least one image of a target object to be detected; identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; the first weight is subtracted from the target weight of the target object to obtain a second weight.
Alternatively, fig. 15 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 15, the computer terminal a may include: one or more processors 1502 (only one of which is shown), a memory 1504, and a transmission 1506.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the object detection method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the object detection method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, and these remote memories may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring a first ratio of the area of the sub-object to the area of the target object; a first weight is determined based on the first fraction.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: determining a first weight based on the first proportion and at least one of the following target information: the hybrid coefficient of the target material, the net weight of the target object and the second proportion of the target type object in the target object, wherein the hybrid coefficient is used for representing the degree that the target object is mixed with the target material, and the net weight is the difference of two weighing results of the target object.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: and performing regression processing on the first proportion and the target information to obtain a first weight.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: carrying out weighted summation on a plurality of first ratios corresponding to a plurality of images to obtain a summation result; the first weight is determined based on the result of the summation.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: identifying a first area where the child object is located in the image; carrying out segmentation processing on the first area to obtain a segmentation result; the area of the sub-object is determined based on the segmentation result.
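The identify-segment-measure steps above reduce to counting pixels in binary segmentation masks; the sketch below shows the resulting first ratio of sub-object area to target-object area. The function name and masks are hypothetical.

```python
def area_ratio_from_masks(sub_object_mask, target_object_mask):
    # Pixel-count areas from binary (0/1) segmentation masks: the
    # first ratio is sub-object area over whole target-object area.
    sub_area = sum(sum(row) for row in sub_object_mask)
    total_area = sum(sum(row) for row in target_object_mask)
    return sub_area / total_area
```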
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: carrying out duplicate removal processing on a plurality of sub-objects corresponding to a plurality of images to obtain duplicate removal results; a first weight is determined based on the deduplication result.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: identifying a first sub-object in the image, wherein the material of the first sub-object is a target material which does not reach a target level; and/or identifying a second sub-object in the image, wherein the material of the second sub-object is a material except the target material.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: extracting a second area from the image, wherein the second area is an area covered by target materials in the target object; performing segmentation processing on the second area according to the grade of the target material to obtain a plurality of sub-areas; and determining a target grade based on the grade corresponding to each sub-area.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: a target level is determined based on the area of each sub-region and the corresponding level.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: determining a corresponding sub-grade of each image based on the area and the corresponding grade of each sub-region in each image to obtain a plurality of sub-grades; and performing fusion processing on the plurality of sub-grades and the corresponding weights to obtain a target grade.
As an alternative, the processor may call the information and application stored in the memory through the transmission device to execute the following steps: acquiring at least one vehicle image of a vehicle object to be detected; identifying a sub-object in the vehicle image, wherein the sub-object is an object to be deducted from the vehicle object; determining a first weight of the sub-object; the first weight is subtracted from the target weight of the vehicle object to obtain a second weight.
As an alternative, the processor may call the information and application stored in the memory through the transmission device to execute the following steps: responding to an image input instruction acting on an operation interface, and acquiring at least one image of a target object to be detected; and responding to an object deduction instruction acting on the operation interface, and displaying a second weight of the target object on the operation interface, wherein the second weight is the weight obtained by deducting the first weight of a sub-object in the target object from the target weight of the target object, and the sub-object is an object to be deducted from the target object.
As an alternative, the processor may call the information and application stored in the memory through the transmission device to execute the following steps: acquiring at least one image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the image; identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; deducting the first weight from the target weight of the target object to obtain a second weight; and outputting the second weight by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the second weight.
The embodiment of the invention provides a method for detecting an object: acquiring at least one image of a target object to be detected; identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; and subtracting the first weight from the target weight of the target object to obtain a second weight. That is to say, the present application identifies the image of the target object, identifies the sub-objects of the target object, and subtracts the weight of the sub-objects from the target weight of the target object to obtain the final second weight, thereby achieving the purpose of deducting the weight of the target object, solving the technical problem that the weight of the sub-objects in the object cannot be subtracted, and achieving the technical effect of subtracting the weight of the sub-objects in the object.
It can be understood by those skilled in the art that the structure shown in fig. 15 is only an illustration, and the computer terminal a may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 15 is not intended to limit the structure of the computer terminal. For example, the computer terminal a may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 15, or have a different configuration than shown in fig. 15.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 5
Embodiments of the present invention also provide a computer-readable storage medium. Optionally, in this embodiment, the computer-readable storage medium may be used to store the program code for executing the object detection method provided in the first embodiment.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring at least one image of a target object to be detected; identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; the first weight is subtracted from the target weight of the target object to obtain a second weight.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: acquiring a first ratio of the area of the sub-object to the area of the target object; a first weight is determined based on the first fraction.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: determining a first weight based on the first proportion and at least one of the following target information: the hybrid coefficient of the target material, the net weight of the target object and the second proportion of the target type object in the target object, wherein the hybrid coefficient is used for representing the degree that the target object is mixed with the target material, and the net weight is the difference of two weighing results of the target object.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: and performing regression processing on the first proportion and the target information to obtain a first weight.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: carrying out weighted summation on a plurality of first ratios corresponding to a plurality of images to obtain a summation result; the first weight is determined based on the result of the summation.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: identifying a first area where the child object is located in the image; carrying out segmentation processing on the first area to obtain a segmentation result; the area of the sub-object is determined based on the segmentation result.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: carrying out duplicate removal processing on a plurality of sub-objects corresponding to a plurality of images to obtain duplicate removal results; a first weight is determined based on the deduplication result.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: identifying a first sub-object in the image, wherein the material of the first sub-object is a target material which does not reach a target level; and/or identifying a second sub-object in the image, wherein the material of the second sub-object is a material except the target material.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: extracting a second area from the image, wherein the second area is an area covered by target materials in the target object; performing segmentation processing on the second area according to the grade of the target material to obtain a plurality of sub-areas; and determining a target grade based on the grade corresponding to each sub-area.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: a target level is determined based on the area of each sub-region and the corresponding level.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: determining a corresponding sub-grade of each image based on the area and the corresponding grade of each sub-region in each image to obtain a plurality of sub-grades; and performing fusion processing on the plurality of sub-grades and the corresponding weights to obtain a target grade.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring at least one vehicle image of a vehicle object to be detected; identifying a sub-object in the vehicle image, wherein the sub-object is an object to be deducted from the vehicle object; determining a first weight of the sub-object; the first weight is subtracted from the target weight of the vehicle object to obtain a second weight.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: responding to an image input instruction acting on an operation interface, and acquiring at least one image of a target object to be detected; and responding to an object deduction instruction acting on the operation interface, and displaying a second weight of the target object on the operation interface, wherein the second weight is the weight obtained by deducting the first weight of a sub-object in the target object from the target weight of the target object, and the sub-object is an object to be deducted from the target object.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring at least one image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the image; identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; deducting the first weight from the target weight of the target object to obtain a second weight; and outputting the second weight by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the second weight.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in an actual implementation: multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Moreover, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take another form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements shall also fall within the protection scope of the present invention.

Claims (14)

1. A method of detecting an object, comprising:
acquiring at least one image of a target object to be detected;
identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object;
determining a first weight of the sub-object;
and deducting the first weight from the target weight of the target object to obtain a second weight.
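The four steps of claim 1 can be sketched as a single function. Here `identify_sub_objects` and `estimate_weight` are hypothetical callables standing in for the recognition and weight-determination steps; they are illustrative assumptions, not part of the claim:

```python
def detect_second_weight(images, target_weight, identify_sub_objects, estimate_weight):
    """Sketch of the claimed flow: identify the sub-objects to be deducted in
    each image, estimate their combined (first) weight, and deduct it from
    the target weight of the target object to obtain the second weight."""
    sub_objects = [obj for image in images for obj in identify_sub_objects(image)]
    first_weight = estimate_weight(sub_objects)
    return target_weight - first_weight
```

For example, a recognizer that finds one sub-object per image, combined with a fixed 5-unit estimate per sub-object, turns two images and a 100-unit target weight into a second weight of 90.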
2. The method of claim 1, wherein determining a first weight of the sub-object comprises:
acquiring a first proportion of an area of the sub-object to an area of the target object;
determining the first weight based on the first proportion.
3. The method of claim 2, wherein determining the first weight based on the first proportion comprises:
determining the first weight based on the first proportion and at least one of the following items of target information:
a mixing factor of a target material, a net weight of the target object, and a second proportion of target-type objects in the target object, wherein the mixing factor indicates the degree to which the target object is mixed with the target material, and the net weight is the difference between two weighing results for the target object.
4. The method of claim 3, wherein determining the first weight based on the first proportion and the target information comprises:
performing regression processing on the first proportion and the target information to obtain the first weight.
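One way to read "regression processing" is a linear model fitted offline, for example on historical weighbridge records. The coefficient names and the linear form below are illustrative assumptions; the claim does not fix a regression family:

```python
def regress_first_weight(first_proportion, mixing_factor, net_weight,
                         type_proportion, coeffs):
    # coeffs = (c_prop, c_mix, c_net, c_type, bias): weights of a pre-fitted
    # linear regression over the first proportion and the three items of
    # target information named in claim 3.
    c_prop, c_mix, c_net, c_type, bias = coeffs
    return (c_prop * first_proportion + c_mix * mixing_factor
            + c_net * net_weight + c_type * type_proportion + bias)
```

A nonlinear regressor (tree ensemble, small MLP) would slot into the same interface.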
5. The method of claim 2, wherein the at least one image comprises a plurality of images, and determining the first weight based on the first proportion comprises:
performing a weighted summation of a plurality of first proportions corresponding to the plurality of images to obtain a summation result;
determining the first weight based on the summation result.
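The weighted summation of claim 5 can be sketched as follows. Normalising by the total weight, and letting per-image weights reflect view quality, are assumptions made for illustration:

```python
def fuse_proportions(proportions, weights):
    # Weighted sum of the first proportions measured in each image; the
    # per-image weights (e.g. view quality or visibility of the sub-object)
    # are normalised so the summation result remains a proportion in [0, 1].
    if len(proportions) != len(weights):
        raise ValueError("one weight per image is required")
    total = sum(weights)
    return sum(p * w for p, w in zip(proportions, weights)) / total
```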
6. The method of claim 2, further comprising:
identifying a first region in the image in which the sub-object is located;
performing segmentation processing on the first region to obtain a segmentation result;
determining the area of the sub-object based on the segmentation result.
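A minimal sketch of deriving the area from the segmentation result, assuming the result is a binary pixel mask of the first region and that a pixel-to-area scale is known; both are assumptions, since the claim does not fix a representation:

```python
def sub_object_area(mask, pixels_per_unit_area):
    # mask: rows of 0/1 values, with 1 marking pixels the segmentation
    # assigned to the sub-object inside the first region.
    pixel_count = sum(sum(row) for row in mask)
    return pixel_count / pixels_per_unit_area
```

The first proportion of claim 2 is then this area divided by the target object's area obtained in the same way.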
7. The method of claim 1, wherein the at least one image comprises a plurality of images, and wherein determining the first weight of the sub-object comprises:
performing deduplication processing on a plurality of sub-objects corresponding to the plurality of images to obtain a deduplication result;
determining the first weight based on the deduplication result.
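The deduplication of claim 7 must match detections of the same physical sub-object seen in several images. The sketch below assumes a matching key (`obj_id`, e.g. from cross-image tracking or IoU matching) has already been computed, which is an illustrative simplification:

```python
def deduplicate(detections):
    # detections: (obj_id, weight_estimate) pairs pooled across all images.
    # Keep the first estimate seen for each obj_id so that no sub-object is
    # deducted from the target weight more than once.
    kept = {}
    for obj_id, estimate in detections:
        kept.setdefault(obj_id, estimate)
    return list(kept.values())
```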
8. The method of claim 1, wherein identifying sub-objects in the target object in the image comprises:
identifying a first sub-object in the image, wherein the material of the first sub-object is a target material which does not reach a target level; and/or
and identifying a second sub-object in the image, wherein the material of the second sub-object is a material other than the target material.
9. The method of claim 8, further comprising:
extracting a second region from the image, wherein the second region is a region covered by the target material in the target object;
performing segmentation processing on the second region according to the level of the target material to obtain a plurality of sub-regions;
determining the target level based on the area of each sub-region and its corresponding level.
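Claim 9 can be read as an area-weighted average over the sub-regions; encoding levels as numbers is an assumption made for illustration:

```python
def overall_level(sub_regions):
    # sub_regions: (area, level) pairs produced by segmenting the second
    # region by level of the target material; the target level is the
    # area-weighted mean of the per-sub-region levels.
    total_area = sum(area for area, _ in sub_regions)
    return sum(area * level for area, level in sub_regions) / total_area
```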
10. A method of detecting an object, comprising:
acquiring at least one vehicle image of a vehicle object to be detected;
identifying a sub-object in the vehicle image, wherein the sub-object is an object to be deducted from the vehicle object;
determining a first weight of the sub-object;
deducting the first weight from a target weight of the vehicle object to obtain a second weight.
11. A method of detecting an object, comprising:
in response to an image input instruction acting on an operation interface, acquiring at least one image of a target object to be detected;
in response to an object deduction instruction acting on the operation interface, displaying a second weight of the target object on the operation interface, wherein the second weight is obtained by deducting a first weight of a sub-object in the target object from a target weight of the target object, and the sub-object is an object to be deducted from the target object.
12. A method of detecting an object, comprising:
acquiring at least one image of a target object to be detected by calling a first interface, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the image;
identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object;
determining a first weight of the sub-object;
deducting the first weight from the target weight of the target object to obtain a second weight;
and outputting the second weight by calling a second interface, wherein the second interface comprises a second parameter, and a parameter value of the second parameter is the second weight.
13. A computer-readable storage medium, comprising a stored program, wherein the program, when executed by a processor, controls an apparatus in which the computer-readable storage medium is located to perform the method of any of claims 1-12.
14. A system for detecting an object, comprising:
a processor;
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: acquiring at least one image of a target object to be detected; identifying a sub-object in the target object in the image, wherein the sub-object is an object to be deducted from the target object; determining a first weight of the sub-object; and deducting the first weight from the target weight of the target object to obtain a second weight.
CN202110886785.7A 2021-08-03 2021-08-03 Object detection method, storage medium and system Pending CN113869103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110886785.7A CN113869103A (en) 2021-08-03 2021-08-03 Object detection method, storage medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110886785.7A CN113869103A (en) 2021-08-03 2021-08-03 Object detection method, storage medium and system

Publications (1)

Publication Number Publication Date
CN113869103A true CN113869103A (en) 2021-12-31

Family

ID=78990196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110886785.7A Pending CN113869103A (en) 2021-08-03 2021-08-03 Object detection method, storage medium and system

Country Status (1)

Country Link
CN (1) CN113869103A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597364A (en) * 2023-03-29 2023-08-15 阿里巴巴(中国)有限公司 Image processing method and device
CN116597364B (en) * 2023-03-29 2024-03-29 阿里巴巴(中国)有限公司 Image processing method and device
CN117235533A (en) * 2023-11-10 2023-12-15 腾讯科技(深圳)有限公司 Object variable analysis method, device, computer equipment and storage medium
CN117235533B (en) * 2023-11-10 2024-03-01 腾讯科技(深圳)有限公司 Object variable analysis method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108596277B (en) Vehicle identity recognition method and device and storage medium
US11823365B2 (en) Automatic image based object damage assessment
CN108388888B (en) Vehicle identification method and device and storage medium
KR20190069457A (en) IMAGE BASED VEHICLES LOSS EVALUATION METHOD, DEVICE AND SYSTEM,
CN113869103A (en) Object detection method, storage medium and system
CN111415106A (en) Truck loading rate identification method, device, equipment and storage medium
US20170004384A1 (en) Image based baggage tracking system
CN107844794A (en) Image-recognizing method and device
CN111401162A (en) Illegal auditing method for muck vehicle, electronic device, computer equipment and storage medium
Chatterjee et al. Intelligent Road Maintenance: a Machine Learning Approach for surface Defect Detection.
CN113743210A (en) Image recognition method and scrap grade recognition method
CN114332702A (en) Target area detection method and device, storage medium and electronic equipment
EP3113091A1 (en) Image based baggage tracking system
CN114492799A (en) Convolutional neural network model pruning method and device, electronic equipment and storage medium
CN114494994A (en) Vehicle abnormal aggregation monitoring method and device, computer equipment and storage medium
CN112562105A (en) Security check method and device, storage medium and electronic equipment
CN113810605A (en) Target object processing method and device
WO2024078112A1 (en) Method for intelligent recognition of ship outfitting items, and computer device
CN112183303A (en) Transformer equipment image classification method and device, computer equipment and medium
CN115731179A (en) Track component detection method, terminal and storage medium
CN114445767B (en) Method and system for detecting foreign matters transmitted by transmission belt
CN116052090A (en) Image quality evaluation method, model training method, device, equipment and medium
Xie et al. Image fusion based on kernel estimation and data envelopment analysis
CN114359819A (en) Image processing method, apparatus, device, storage medium, and computer program product
CN112818832B (en) Weak supervision object positioning device and method based on component perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination