CN113515985A - Self-service weighing system, weighing detection method, equipment and storage medium - Google Patents

Self-service weighing system, weighing detection method, equipment and storage medium

Info

Publication number
CN113515985A
CN113515985A (application number CN202010626670.XA; granted as CN113515985B)
Authority
CN
China
Prior art keywords
vehicle
weighing
image
area
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010626670.XA
Other languages
Chinese (zh)
Other versions
CN113515985B (en)
Inventor
白德桃
魏溪含
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010626670.XA
Publication of CN113515985A
Application granted
Publication of CN113515985B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/025 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/52 Network services specially adapted for the location of the user terminal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present application provide a self-service weighing system, a weighing detection method, a device, and a storage medium. In the self-service weighing system, the weighing process of a vehicle is photographed and the captured image data is sent to a server, which identifies the vehicle's identifier and analyzes the vehicle's weighing behavior from the image data. Based on the identified identifier, the server can automatically record the vehicle's weight data, reducing reliance on manual entry of weighing data. At the same time, by analyzing the vehicle's weighing behavior, the server can promptly detect abnormal or cheating behavior of the weighed vehicle, so that the weighing process is managed intelligently and the reliability of the self-service weighing mode is improved.

Description

Self-service weighing system, weighing detection method, equipment and storage medium
Technical Field
The application relates to the technical field of computer vision, in particular to a self-service weighing system, a weighing detection method, equipment and a storage medium.
Background
During cargo transportation, a transport vehicle needs to be weighed on a weighbridge (also called a weighing scale) to obtain its load information. In the traditional weighing mode, the weighing process is usually monitored manually and the license plate of the vehicle being weighed is recorded by hand. This approach is labor-intensive and carries the risk of human interference in the weighing process.
A self-service weighing mode already exists in which, with no attendant present, the driver swipes a card to complete weighing detection. However, this self-service mode still suffers from poor reliability, so a new solution is needed.
Disclosure of Invention
Aspects of the present application provide a self-service weighing system, a weighing detection method, a device, and a storage medium, to improve reliability of self-service weighing.
An embodiment of the present application provides a self-service weighing system, including: a weighbridge scale, an image acquisition device, and a server. The weighbridge scale is arranged in a weighing area and is used to detect the weight of a vehicle being weighed and to send the detected weight data to the server. The image acquisition device is arranged above the weighing area and is used to capture image data while the vehicle is being weighed and to send the image data to the server. The server is used to: identify an identifier of the vehicle and the weighing behavior of the vehicle from the image data; record the vehicle's identifier together with the weight data; and manage the weighing process of the vehicle in the weighing area according to the vehicle's weighing behavior.
An embodiment of the present application further provides a data processing method, including: acquiring image data and an execution result of a mobile object in a designated area when executing a target task; extracting the features of the image data, and identifying the position and the identification of the moving object according to the extracted image features; identifying behavior characteristics of the mobile object when the mobile object executes the target task according to the position of the boundary of the designated area and the position of the mobile object; and correspondingly recording the execution result and the identification of the mobile object, and managing the processing flow associated with the target task according to the behavior characteristics.
The embodiment of the application further provides a weighing detection method, which comprises the following steps: acquiring image data and weight data of a vehicle when the vehicle is subjected to weighing in a weighing area; identifying an identity of the vehicle and a weighing behavior of the vehicle from the image data; correspondingly recording the identification of the vehicle and the weight data, and managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle.
An embodiment of the present application further provides a vehicle inspection method, including: acquiring image data and an inspection result of a vehicle in a first detection area during vehicle inspection; identifying an identifier of the vehicle and an inspection behavior of the vehicle from the image data; and recording the vehicle's identifier together with the inspection result, and managing the inspection process of the vehicle according to the inspection behavior of the vehicle.
An embodiment of the present application further provides an electronic device, including a memory and a processor. The memory is configured to store one or more computer instructions, and the processor is configured to execute the one or more computer instructions to perform the weighing detection method provided in the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, can implement the weighing detection method provided in the embodiments of the present application.
In the embodiments of the present application, the weighing process of the vehicle is photographed and the captured image data is sent to the server, which can identify the vehicle's identifier and analyze the vehicle's weighing behavior from the image data. Based on the identified identifier, the server can automatically record the vehicle's weight data, reducing reliance on manual entry of weighing data. At the same time, by analyzing the vehicle's weighing behavior, the server can promptly detect abnormal or cheating behavior of the weighed vehicle, so that the weighing process is managed intelligently and the reliability of the self-service weighing mode is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic block diagram of a self-service weighing system provided in accordance with an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a self-service weighing system provided in accordance with another exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a neural network model provided in an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of an algorithm for analyzing weighing behavior provided by an exemplary embodiment of the present application;
FIG. 5a is a schematic flow chart of a method for detecting weighing in accordance with an exemplary embodiment of the present application;
FIG. 5b is a schematic flow chart of a method for detecting weighing in accordance with another exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In many scenarios, when a vehicle transports goods, it needs to be weighed by weighing in order to obtain vehicle load information. For example, when goods are shipped, a goods manufacturer needs to weigh the shipped goods loaded on a vehicle to record shipment data. Vehicles are weighed before they enter certain road segments (e.g., freeways or roads requiring special maintenance) in order to calculate the associated road use costs.
Taking the cement industry as an example, some cement manufacturers currently weigh vehicles in the traditional way, in which the compliance of the vehicle's weighing behavior is monitored manually and the license plate of the weighed vehicle is entered by hand. This approach is inefficient, leads to disordered vehicle management, and is prone to human interference. To improve working efficiency, standardize the weighing process, manage weighing data effectively, and raise the level of enterprise informatization, the demand for an unattended intelligent weighing system is increasingly urgent.
Currently, unattended weighing systems on the market identify vehicles by the swiping of an IC (Integrated Circuit) card. In such a system, an IC card must be issued to each customer (transport driver) so that each vehicle has its own card, and the customer swipes the card and weighs in a self-service manner. This mode realizes unattended weighing to a certain extent, but it still has several loopholes that prevent a highly reliable self-service weighing mode. First, although each IC card is nominally bound to one vehicle, cards can easily be swapped or borrowed, so the IC card may not match the license plate. Second, issuing IC cards costs time and money, making it difficult to handle newly added vehicles efficiently. In addition, this mode monitors only the IC card, not the vehicle being weighed, so vehicle behavior that violates the weighing requirements during the weighing process goes undetected.
In order to solve the technical problem of poor reliability in the above technical solutions, an embodiment of the present application provides a self-service weighing system, which will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a self-service weighing system according to an exemplary embodiment of the present application, and as shown in fig. 1, the self-service weighing system 100 includes: a weighbridge 101, an image acquisition device 102 and a server 103.
The weighbridge scale 101, also called a truck scale, is the main weighing device for weighing bulk goods. It is arranged in the weighing area and is used to detect the weight of a vehicle being weighed and to send the detected weight data to the server 103.
The weighing area is an area defined on the ground or other plane and used for carrying out weighing tasks. Typically, the weighing areas have a regular shape and are delineated by clear lines for easy identification and differentiation.
The image acquisition device 102 is arranged above the weighing area and is used to capture image data while the vehicle is being weighed and to send the captured image data to the server 103. The image data captured by the image acquisition device 102 may be a continuously recorded video stream or an image sequence captured at intervals, which is not limited in this embodiment.
The server 103 is communicatively connected to both the weighbridge scale 101 and the image acquisition device 102 and is used to: receive the weight data of the vehicle sent by the weighbridge scale 101 and the image data sent by the image acquisition device 102; identify the identifier of the vehicle and the weighing behavior of the vehicle from the image data; and then record the vehicle's identifier together with the weight data, managing the weighing process of the vehicle in the weighing area according to the vehicle's weighing behavior.
Wherein, the weighing process can be set according to the actual weighing demand. For example, it may include: a weighing process after the weighing operation is completed, an early warning process triggered by an abnormal weighing behavior, a weighing data review process, a net weight and tare weight calibration process, and the like, which are not limited in this embodiment.
The server 103 may be implemented as a server device, including a conventional server, a cloud host, a virtualization center, or other devices, which is not limited in this embodiment. A server device mainly includes a processor, a hard disk, a memory, a system bus, and the like, similar to a general computer architecture, and is not described in detail here.
In the self-service weighing system 100, the weighbridge scale 101 and the image acquisition device 102 may each communicate with the server 103 in a wired or wireless manner. Wireless communication modes include short-range modes such as Bluetooth, ZigBee, infrared, and Wi-Fi (Wireless Fidelity), long-range modes such as LoRa, and wireless communication based on a mobile network. When the connection is made over a mobile network, the network standard may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMAX, and the like, which is not limited in this embodiment.
In this embodiment, the weighing process of the vehicle is photographed and the captured image data is sent to the server, which can identify the vehicle's identifier and analyze the vehicle's weighing behavior from the image data. Based on the identified identifier, the server can automatically record the vehicle's weight data, reducing reliance on manual entry of weighing data. At the same time, by analyzing the vehicle's weighing behavior, the server can promptly detect abnormal or cheating behavior of the weighed vehicle, so that the weighing process is managed intelligently and the reliability of the self-service weighing mode is improved.
In some optional embodiments of the self-service weighing system 100, when managing the weighing process of the vehicle in the weighing area according to the vehicle's weighing behavior, the server 103 may determine whether the weighing behavior meets the weighing requirements and issue corresponding instructions to other devices in the self-service weighing system 100 according to the determination result.
As shown in fig. 2, the self-service weighing system 100 further includes: a barrier gate device 104. A barrier gate, also called a car stopper, is an entrance-and-exit management device that restricts the movement of motor vehicles on a road in order to manage vehicles entering and leaving. The barrier gate device 104 may raise and lower its bar via wireless remote control.
Based on the above, the server 103 may start the weighing process and send an opening instruction to the barrier gate device 104 when it determines that the weighing behavior of the vehicle meets the set weighing requirements. After receiving the opening instruction, the barrier gate device 104 opens the weighing channel accordingly; for example, when the barrier gate device 104 includes a lifting bar, the bar may be raised to allow the vehicle to pass.
As shown in fig. 2, the self-service weighing system 100 further includes: an alarm device 105. The warning device 105 may be implemented as a multimedia device, such as a microphone, an audio/video player, etc., or may also be implemented as a terminal device of a driver, such as a mobile phone, a vehicle-mounted terminal, etc., which is not limited in this embodiment.
Based on the above, the server 103 may send an alarm instruction to the alarm device 105 when it determines that the weighing behavior of the vehicle does not meet the set weighing requirements. The alarm device 105 may then issue a warning signal according to the alarm instruction to prompt the driver to correct the vehicle's weighing behavior.
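The dispatch logic described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the device classes and method names (`BarrierGate.open`, `AlarmDevice.warn`) are assumptions introduced for the example.

```python
# Minimal sketch of the server's dispatch logic: compliant weighing
# behavior opens the barrier gate; non-compliant behavior triggers the
# alarm device. Class and method names are illustrative assumptions.

class BarrierGate:
    """Stand-in for barrier gate device 104."""
    def __init__(self):
        self.opened = False

    def open(self):
        self.opened = True  # e.g. lift the bar so the vehicle can pass


class AlarmDevice:
    """Stand-in for alarm device 105."""
    def __init__(self):
        self.warned = False

    def warn(self):
        self.warned = True  # e.g. play a voice prompt to the driver


def manage_weighing(behavior_ok: bool, barrier: BarrierGate,
                    alarm: AlarmDevice) -> str:
    """Issue an instruction based on the analyzed weighing behavior."""
    if behavior_ok:
        barrier.open()
        return "open"
    alarm.warn()
    return "alarm"
```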
In the self-service weighing system 100, the image capturing Device 102 may be implemented as various electronic devices capable of achieving high-definition shooting, including but not limited to electronic devices that perform imaging based on a CCD (Charge-coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor, such as a high-speed video camera, a rotary video camera, an infrared night vision camera, and the like, and will not be described in detail.
Optionally, the number of the image capturing devices 102 may be one or more, and the embodiment is not limited.
In some alternative embodiments, the image capturing device 102 may include a high-definition camera with a large field of view, which may be located above the center of the weighing area, and the shooting range of the high-definition camera may cover the weighing area. Further, the behavior of the vehicle entering or exiting the weighing area may be photographed.
In other alternative embodiments, as shown in fig. 1 and 2, the image acquisition device 102 may include a first image acquisition device 1021 and a second image acquisition device 1022. The first image acquisition device 1021 is installed above the position where vehicles drive onto the scale and captures the driving state of the tail of the vehicle at that position. The second image acquisition device 1022 is installed above the position where vehicles drive off the scale and captures the driving state of the head of the vehicle at that position. For convenience of description and distinction, the image captured by the first image acquisition device 1021 is referred to as a first image, and the image captured by the second image acquisition device 1022 is referred to as a second image. That is to say, the image data received by the server 103 is captured by two image acquisition devices, which helps capture the behavior characteristics of the vehicle in the weighing area more comprehensively and accurately.
It should be understood that, in the present embodiment, "first" and "second" are used to define the device and the image, and are only used for convenience of description and distinction, and do not represent information such as sequence, level, quantity, and the like.
In some optional embodiments, the server 103 may analyze the received image data based on a machine learning algorithm to identify the identity and weighing behavior of the vehicle. As will be exemplified below.
In this embodiment, a neural network model may be trained based on a machine learning algorithm, and the neural network model is stored at the server 103. After receiving the image data sent by the image acquisition device 102, the server 103 may input the image data into the neural network model. As shown in fig. 3, the neural network model includes a feature extraction network (feature extractor), a license plate detection network, and a license plate number recognition network.
After the image data is input into the neural network model, it first passes through the feature extraction network, where features are extracted from the image data according to pre-trained model parameters and a feature extraction algorithm to obtain image features.
Alternatively, the feature extraction network may be implemented as the ResNet18 backbone (Backbone ResNet18) shown in fig. 3. Of course, in other alternative embodiments, the feature extraction network may also be implemented as ResNet34, ResNet50, or the like, which is not limited in this embodiment. Preferably, the lightweight ResNet18 is used for feature extraction, which helps extract image features quickly, and the extracted features have high stability.
After being output by the feature extraction network, the image features are input into the license plate detection network, where they are processed according to pre-trained model parameters and a license plate detection algorithm to detect the license plate in the image data and output its position. Alternatively, the license plate detection network is implemented as the regression predictor (Regression head) shown in fig. 3. Based on the license plate position output by the license plate detection network, the local image features corresponding to the license plate can be determined from the image features output by the feature extraction network; these local image features are the feature expression of the local image region corresponding to the license plate. As shown in fig. 3, ROI Align (a region-of-interest feature pooling algorithm) can be used to map the image features into fixed-size local image features based on the license plate position output by the Regression head.
The extracted local image features are input into the license plate number recognition network, where the license plate number expressed by the local image features is recognized according to pre-trained model parameters and a license plate number recognition algorithm, yielding the license plate number contained in the image data.
Alternatively, the license plate number recognition network may be implemented as the recognition head shown in fig. 3, which can consist of a plurality of linear classification layers, one classifying each character on the license plate. In the illustration of fig. 3, the recognition head is implemented as 7 linear fully-connected classification layers; it should be understood that, in practice, the number of such layers may be adjusted according to the number of characters on the license plate, and this will not be described again.
Based on the license plate number recognized as described above, the server 103 can automatically and quickly obtain the license plate number of the weighed vehicle and match it one-to-one with the vehicle weight data sent by the weighbridge scale 101. Manual entry of the license plate number is therefore unnecessary, which greatly shortens the vehicle's weighing wait time and avoids peak-hour congestion.
Based on the above embodiments, the server 103 may further analyze whether the weighing behavior of the vehicle meets the weighing requirement based on the identified license plate number.
Alternatively, for each weighing operation, the server 103 may recognize all license plate numbers in the image data captured during that operation. If the image data contains multiple different license plate numbers, the weighing operation can be considered to be performed by multiple vehicles together, that is, a follow-weighing (tailgating) behavior has occurred in the weighing area. For example, if license plate number M is recognized from the vehicle-tail image captured by the first image acquisition device 1021 and license plate number N is recognized from the vehicle-head image captured by the second image acquisition device 1022, the vehicle with plate M can be considered to be weighing along with the vehicle with plate N, and both exhibit a weighing cheating behavior.
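The follow-weighing check above reduces to counting distinct plate numbers across the two camera views. A minimal sketch, with the function name and data shapes introduced as assumptions:

```python
# Sketch of the follow-weighing check: collect every plate number
# recognized in one weighing operation (tail image from device 1021,
# head image from device 1022); more than one distinct plate means
# vehicles are weighing together.

def detect_follow_weighing(plates_tail, plates_head):
    """Return the set of implicated plates if more than one distinct
    license plate number appears across both views, else an empty set."""
    plates = set(plates_tail) | set(plates_head)
    return plates if len(plates) > 1 else set()
```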
Optionally, for each weighing operation, after recognizing the license plate number of the vehicle, the server 103 may query the historical weighing records for that plate; if the plate has multiple weighing records associated with the weighing area within a set time range, the vehicle can be determined to exhibit repeated weighing behavior. The set time range may be chosen according to actual weighing requirements, e.g., 30 minutes, 1 hour, 6 hours, or 24 hours, which is not limited in this embodiment. For example, if a cement transport vehicle is weighed repeatedly in the same weighing area within 1 hour, it can be considered to exhibit abnormal weighing behavior.
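The repeated-weighing check can be sketched as a query over the plate's history within the configured window. The record layout, function name, and tolerance threshold are assumptions for illustration:

```python
# Sketch of the repeated-weighing check: flag the vehicle when more
# than a tolerated number of its records fall inside the time window.
from datetime import timedelta

def has_repeat_weighing(history, plate, now,
                        window=timedelta(hours=1), limit=1):
    """history: iterable of (plate_number, timestamp) weighing records.
    Returns True when `plate` has more than `limit` records within
    `window` of `now`."""
    recent = [t for p, t in history
              if p == plate and timedelta(0) <= now - t <= window]
    return len(recent) > limit
```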
Next, an alternative embodiment of the server 103 identifying the weighing behavior of the vehicle according to the image data will be further described with reference to the neural network model illustrated in fig. 3.
In some optional embodiments, the neural network model further comprises: vehicle detection network (Classification head). As shown in fig. 3, the image features may be input to the vehicle detection network after being output by the feature extraction network. In the vehicle detection network, the image features are calculated according to pre-trained model parameters and a position extraction algorithm so as to detect the position of the vehicle, and a detection frame of the vehicle is marked in the image data according to the position of the vehicle. Here, the detection frame of the vehicle marked in the image data may refer to (x0, y0, x1, y1) shown in fig. 4.
The image data input into the neural network model includes a first image and a second image, and accordingly, when the detection frame of the vehicle is marked in the image data, the detection frame of the vehicle can be marked on the first image according to the vehicle detection result of the first image, and the detection frame of the vehicle can be marked on the second image according to the vehicle detection result of the second image. After the detection frame of the vehicle is obtained, the server 103 may analyze the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area.
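One simple way to relate the detection frame to the weighing-area boundary is a containment test, sketched below. The rectangular boundary and the function name are simplifying assumptions; the patent does not prescribe this exact computation, and an area outlined by arbitrary lines would need a polygon test instead.

```python
# Sketch: check whether the vehicle's detection frame (x0, y0, x1, y1),
# as in fig. 4, lies fully inside the calibrated weighing-area
# rectangle. A frame crossing the boundary would indicate
# non-compliant weighing behavior (e.g. a wheel off the scale).

def box_inside_area(box, area):
    """Both arguments are (x0, y0, x1, y1) pixel rectangles; return
    True when `box` lies entirely within `area`."""
    bx0, by0, bx1, by1 = box
    ax0, ay0, ax1, ay1 = area
    return ax0 <= bx0 and ay0 <= by0 and bx1 <= ax1 and by1 <= ay1
```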
It is noted that, in some alternative embodiments, the boundary of the weighing area may be identified by the server 103 using an image recognition algorithm. For example, when the weighing area is a shape outlined by lines, the boundary lines of the weighing area can be identified from the image data captured while the vehicle is being weighed, and the identified boundary lines can then be corrected using the Manhattan algorithm. The boundary of the weighing area can be determined from the corrected boundary lines.
In other alternative embodiments, the location of the boundary of the weighing area is calibrated based on the invariance of the location of image capture device 102. It should be appreciated that image capture device 102 is fixedly mounted, and the relative positions of image capture device 102 and the weighing area are fixed, such that the coordinate position of the weighing area in the captured image data is invariant. Based on this, the boundary of the weighing area may be calibrated in advance according to the image data captured by the image capturing device 102, which will be described as an example below.
Alternatively, the image capture device 102 may capture the weighing area, obtain a third image, and send the third image to a terminal device (e.g., a display screen, a computer, a mobile phone, a tablet computer, etc.). The terminal device can display the third image so that the user can mark the weighing area on the third image. According to the calibration operation of the user on the third image, the terminal device may determine the position of the boundary of the weighing area, and may send the position of the boundary to the server 103 for storage for subsequent use.
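Concretely, the stored calibration need be no more than a list of boundary vertices per camera view. A minimal sketch of how the server side might persist the calibration (the JSON format, file path handling, and camera identifiers are assumptions for illustration, not part of the patent):

```python
import json
from pathlib import Path

# Calibrated weighing-area boundary polygons, one per camera view.
# Camera ids and pixel coordinates here are illustrative only.
calibration = {
    "camera_entry": [(120, 400), (520, 400), (540, 80), (100, 80)],
    "camera_exit":  [(130, 420), (510, 420), (530, 90), (110, 90)],
}

def save_calibration(cal, path):
    """Persist the per-camera boundary polygons as JSON for later use."""
    Path(path).write_text(json.dumps(cal))

def load_calibration(path):
    """Reload the polygons; JSON stores tuples as lists, so convert back."""
    raw = json.loads(Path(path).read_text())
    return {cam: [tuple(pt) for pt in pts] for cam, pts in raw.items()}
```

Because the cameras are fixed, this calibration step runs once and the stored polygons are simply looked up on every subsequent weighing.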
Optionally, in this embodiment, when the image capturing apparatus 102 includes the first image capturing apparatus 1021 and the second image capturing apparatus 1022, the third image captured by the first image capturing apparatus 1021 and the third image captured by the second image capturing apparatus 1022 may be calibrated respectively, so as to obtain calibration results of the boundary of the weighing area at different viewing angles. For convenience of description and distinction, the calibration result obtained from the third image captured by the first image capturing device 1021 may be described as a first calibration result, and the calibration result obtained from the third image captured by the second image capturing device 1022 may be described as a second calibration result.
An alternative embodiment in which the server 103 analyzes the weighing behavior of the vehicle according to the position coordinates of the vehicle and the boundary of the weighing area will be described below with reference to the first image and the second image captured from different perspectives.
It should be appreciated that, to be weighed, the vehicle drives into the weighing area. When the vehicle is fully on the scale, its rear wheels should be located inside the boundary of the weighing area. Accordingly, whether the vehicle has been weighed completely can be judged from the relative position of the detection frame of the vehicle tail and the upper pound boundary of the weighing area, as detailed below.
Optionally, the first coordinate at the lower left corner of the vehicle tail is determined according to a first detection frame marked on the first image. Next, a first line segment intersecting the first detection frame and passing through the first coordinate is determined, the first line segment being parallel to the upper pound boundary of the weighing area; that is, a line segment parallel to the upper pound boundary is drawn through the first coordinate. The upper pound boundary is the boundary first contacted by the front wheels when the vehicle enters the weighing area. Next, the relative position between the midpoint of the first line segment and the upper pound boundary is determined. If the midpoint of the first line segment is above the upper pound boundary of the weighing area, it is determined that the vehicle has been weighed completely; if the midpoint is below the upper pound boundary, it is determined that the vehicle has not been weighed completely.
Generally, the lower left corner of the vehicle tail corresponding to the first coordinate is the contact position of the left rear wheel with the ground, so the first line segment can be regarded approximately as the line connecting the left rear wheel and the right rear wheel. When the midpoint of that line is above the upper pound boundary of the weighing area, both rear wheels are considered to have entered the weighing area. If the midpoint is below the upper pound boundary, the left rear wheel and/or the right rear wheel has not driven into the weighing area, i.e., the vehicle has not been weighed completely.
Taking fig. 4 as an example, the detection frame of the vehicle is (x0, y0, x1, y1), and the first coordinate is the coordinate of point A' in fig. 4. In fig. 4, AB is the upper pound boundary of the weighing area. A straight line is drawn through point A' and intersects the detection frame at point B', giving the line segment A'B', which can be taken approximately as the position of the rear wheels when the vehicle is stopped. Next, the midpoint C' of the line segment A'B' is obtained: if C' is above AB, the vehicle is deemed to have stopped within the weighing area; if C' is below AB, it is deemed not to have.
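The geometric test of fig. 4 can be written down in a few lines. A minimal sketch, assuming image coordinates with the y axis pointing down (so a smaller row value means "above" the boundary) and the boundary AB given as two pixel points; the function names and the coordinate convention are illustrative assumptions, not fixed by the patent:

```python
def boundary_y_at(a, b, x):
    """Linearly interpolate the boundary line AB at horizontal position x."""
    (ax, ay), (bx, by) = a, b
    if bx == ax:              # degenerate (vertical) segment; fall back to ay
        return ay
    t = (x - ax) / (bx - ax)
    return ay + t * (by - ay)

def fully_on_scale(box, ab):
    """True if the tail midpoint C' lies above (inside) the boundary AB.

    box: tail detection frame (x0, y0, x1, y1). Its bottom edge, from
    A' = (x0, y1) to B' = (x1, y1), approximates the rear-wheel line.
    """
    x0, _, x1, y1 = box
    cx = (x0 + x1) / 2.0      # midpoint C' of segment A'B'
    return y1 < boundary_y_at(ab[0], ab[1], cx)
```

With a horizontal boundary at row 250, a tail box whose bottom edge sits at row 200 is judged fully on the scale, while one at row 300 is not.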
When a vehicle is weighed, to ensure that the weighing data are correct, the vehicle must not move out of the weighing area early. That is, during weighing, the front wheels of the vehicle should be located inside the boundary of the weighing area. Accordingly, whether the vehicle has left the scale early can be judged from the relative position of the detection frame of the vehicle head and the weighing area, as detailed below.
Optionally, a second coordinate at the lower right corner of the vehicle head can be determined according to a second detection frame marked on the second image. Then a second line segment passing through the second coordinate is determined, the second line segment being parallel to the lower pound boundary of the weighing area; that is, a line segment parallel to the lower pound boundary is drawn through the second coordinate. The lower pound boundary is the boundary first contacted by the front wheels when the vehicle drives out of the weighing area. Then the relative position between the midpoint of the second line segment and the lower pound boundary is determined: if the midpoint is above the lower pound boundary, it is determined that the vehicle has not left the scale; if the midpoint is below the lower pound boundary, it is determined that the vehicle has left the scale.
Usually, the lower right corner of the vehicle head corresponding to the second coordinate is the contact position of the left front wheel with the ground, so the second line segment can be regarded approximately as the line connecting the left front wheel and the right front wheel. When the midpoint of that line is above the lower pound boundary of the weighing area, neither front wheel is considered to have exited the weighing area. If the midpoint is below the lower pound boundary, the left front wheel and/or the right front wheel has driven out of the weighing area, i.e., the vehicle has left the scale.
It should be understood that "left" and "right" on the first image and the second image in this embodiment are relative to the observer's (shooting) angle of view, whereas the left, right, front and rear of the wheels are described with the driver in the cab as the reference object; when the reference object changes, the descriptions change accordingly and are not repeated here.
Further optionally, the server 103 may obtain the operating status of the barrier device 104 when it determines that the vehicle has left the scale. If the vehicle is determined to have left the scale but the barrier device 104 has not opened the passage, the server 103 may determine that the vehicle has maliciously run the barrier.
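Under the simplifying assumption of a horizontal lower pound boundary at pixel row `y_exit` (again with the image y axis pointing down), the exit-side test and the barrier check described above might look as follows; the names and the boolean `barrier_open` flag are illustrative assumptions:

```python
def has_left_scale(head_box, y_exit):
    """True once the midpoint of the head box's bottom edge has crossed
    below the lower pound boundary, i.e. the front wheels have exited."""
    x0, _, x1, y1 = head_box
    return y1 > y_exit      # below the boundary row in image coordinates

def is_malicious_barrier_run(head_box, y_exit, barrier_open):
    """The vehicle drove off the scale while the barrier was still closed."""
    return has_left_scale(head_box, y_exit) and not barrier_open
```

A head box ending at row 320 against a boundary at row 300 counts as having left the scale; combined with a closed barrier, it is flagged as a barrier run.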
Based on the above embodiments, the server 103 can analyze, from the position coordinates of the vehicle and the boundary of the weighing area, whether the vehicle was weighed completely, left the scale early, maliciously ran the barrier, and so on. If the vehicle was not weighed completely, left the scale early, or maliciously ran the barrier, abnormal weighing behavior can be considered to have occurred.
In some optional embodiments, after the server 103 obtains the detection frames of the vehicles marked in the image data, the weighing behavior may be further analyzed according to the number of detection frames. Optionally, the server 103 may determine the number of vehicles within the boundary of the weighing area according to the first detection frame marked on the first image and/or the second detection frame marked on the second image; if multiple vehicles are within the boundary of the weighing area, the server 103 may determine that a following-vehicle weighing behavior exists. In such an embodiment, the vehicle detection network identifies each vehicle and outputs one detection frame per vehicle. For any single weighing operation, if several of the marked detection frames fall within the weighing area, the cheating behavior of weighing with a following vehicle can be considered to exist.
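Counting the detection frames inside the weighing area is enough to flag following-vehicle weighing. A sketch assuming the area projects to the horizontal band of image rows between two calibrated boundary rows (a deliberate simplification of the calibrated polygon; names are illustrative):

```python
def count_on_scale(boxes, y_top, y_bottom):
    """Count detection frames (x0, y0, x1, y1) whose bottom-edge midpoint
    lies between the two boundary rows (y_top < y_bottom, y axis down)."""
    inside = 0
    for (x0, y0, x1, y1) in boxes:
        if y_top < y1 < y_bottom:
            inside += 1
    return inside

def has_tailgating(boxes, y_top, y_bottom):
    """More than one vehicle on the scale during a single weighing."""
    return count_on_scale(boxes, y_top, y_bottom) > 1
```

With the band between rows 100 and 300, two boxes ending at rows 150 and 180 are both counted as on the scale and trigger the flag, while a single box does not.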
In some optional embodiments, the neural network model further comprises: a vehicle classification network.
The image features output by the feature extraction network may be input into the vehicle classification network, where they are processed according to pre-trained model parameters and a classification algorithm to identify the type of the vehicle. Vehicle types may include cars, trucks, buses, trailers, and the like; different types have different load capacities, which are bounded by the rated load of the vehicle.
Next, the rated load of the vehicle may be determined according to its type. The correspondence between vehicle type and rated load may be stored in the server 103 in advance; after determining the type of the vehicle, the server 103 queries this correspondence to obtain the rated load. If the weight data detected by the weigh scale 101 while the vehicle is weighed is larger than the rated load corresponding to the vehicle type, it can be determined that the vehicle exhibits overload behavior; if the weight data is less than or equal to the rated load, it is determined that no overload behavior exists.
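The rated-load check amounts to a table lookup against the classifier's label. The figures below are illustrative placeholders, not values from the patent:

```python
# Illustrative rated loads in kilograms, keyed by the classifier's label.
RATED_LOAD_KG = {
    "car": 500,
    "truck": 25_000,
    "bus": 8_000,
    "trailer": 40_000,
}

def is_overloaded(vehicle_type, weight_kg):
    """True if the weighbridge reading exceeds the type's rated load.

    Raises KeyError for a type missing from the table, so an unknown
    class is surfaced instead of being silently passed.
    """
    return weight_kg > RATED_LOAD_KG[vehicle_type]
```

A truck weighed at 26 000 kg against a 25 000 kg rated load is flagged; a reading equal to the rated load is not, matching the less-than-or-equal rule above.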
Based on the above embodiments, the server 103 may analyze various behaviors of the vehicle during weighing, including at least one of the following: whether the vehicle was weighed multiple times, whether a following vehicle was weighed together with it, whether it was weighed incompletely, whether it left the scale early, whether it maliciously ran the barrier, and whether it was overloaded.
If the vehicle exhibits at least one of repeated weighing, following-vehicle weighing, incomplete weighing, leaving the scale early, maliciously running the barrier, or overloading, the server 103 may send a corresponding early warning signal to the early warning device 105, which prompts the driver to correct the weighing behavior so as to regulate it.
It should be noted that, in some alternative embodiments, the server 103 may record the weighing behavior of the vehicle together with the identification and weight data of the vehicle, so as to obtain weighing details for the vehicle.
Optionally, the server 103 may also send the recorded detail data to a terminal device of the driver corresponding to the vehicle for the driver to view. On the one hand, the detail data include the vehicle identification and the weighing behavior recognized from the image data; if the recognition result is wrong, the driver can file an appeal and request a correction through the terminal device. On the other hand, when the recorded weighing behaviors include cheating such as following-vehicle weighing, incomplete weighing or maliciously running the barrier, the record serves as a warning to the driver to regularize subsequent weighing behavior.
It should be noted that the neural network model used in the above embodiments can be obtained by training in advance on a large number of sample images. The sample images may include pictures of vehicles taken at the upper weighing position of the weighing area and pictures taken at the lower weighing position of the weighing area. After the sample images are input into the neural network model, the vehicle detection network and the license plate detection network can be trained first. When the outputs of the vehicle detection network and the license plate detection network are close to the actual annotations of the sample images, it can be determined that the two networks detect vehicles and license plates reliably. At this point, the license plate number recognition network can be trained further on the output of the license plate detection network until it accurately recognizes the license plate number of the vehicle; details are not repeated.
Fig. 5a is a schematic flowchart of a data processing method according to an exemplary embodiment of the present application, and as shown in fig. 5a, the method includes:
step 501, image data and weight data of the vehicle when the vehicle is in the weighing area for weighing are obtained.
Step 502, identifying the identity of the vehicle and the weighing behavior of the vehicle according to the image data.
Step 503, correspondingly recording the identification of the vehicle and the weight data, and managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle.
In some optional embodiments, the image data comprises: a first image captured at a top-weighing position of the weighing area and a second image captured at a bottom-weighing position of the weighing area; the first image includes a running state of a tail of the vehicle, and the second image includes a running state of a head of the vehicle.
In some alternative embodiments, one way of identifying the identity of the vehicle from the image data includes: inputting the image data into a neural network model; the neural network model comprises a feature extraction network, a license plate detection network and a license plate number identification network; in the feature extraction network, feature extraction is carried out on the image data to obtain image features; inputting the image characteristics into the license plate detection network so as to determine the position of a license plate from the image data according to the image characteristics; and according to the license plate position, determining local image characteristics corresponding to the license plate from the image characteristics, and inputting the local image characteristics into the license plate number identification network to identify the license plate number contained in the image data.
In some alternative embodiments, from the image data, a manner of identifying the weighing behavior of the vehicle includes: if the image data contains a plurality of different license plate numbers, determining that the vehicle has the behavior of weighing with the vehicle in the weighing area; and if the license plate number of the vehicle has a plurality of weighing records associated with the weighing area within a set time range, determining that the vehicle has a plurality of weighing behaviors.
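The repeated-weighing test is a windowed count over the stored weighing records. A sketch assuming a record shape of (plate_number, timestamp); the one-hour default window is an assumption, not a value set by the patent:

```python
from datetime import datetime, timedelta

def has_repeat_weighing(records, plate, now, window=timedelta(hours=1)):
    """True if `plate` has more than one weighing record for this
    weighing area within `window` before `now` (including the current one).

    records: iterable of (plate_number, datetime) pairs.
    """
    recent = [t for p, t in records if p == plate and now - window <= t <= now]
    return len(recent) > 1
```

A plate weighed at 11:30 and again at 12:00 is flagged, while a plate with a single record in the window is not.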
In some optional embodiments, the neural network model further comprises: a vehicle detection network; accordingly, one way of identifying the weighing behavior of the vehicle from the image data includes: inputting the image features into the vehicle detection network to detect the position of the vehicle according to the image features, and labeling a detection frame of the vehicle in the image data according to the position of the vehicle; and analyzing the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area.
In some optional embodiments, further comprising: shooting the weighing area to obtain a third image; displaying the third image for a user to calibrate the weighing area on the third image; and determining the boundary of the weighing area according to the calibration operation of the user on the third image.
In some optional embodiments, one way of analyzing the weighing behavior of the vehicle based on the detection frame of the vehicle and the boundary of the weighing area comprises: determining a first coordinate at the lower left corner of the vehicle tail according to a first detection frame marked on the first image; determining a first line segment intersecting the first detection frame through the first coordinate, the first line segment being parallel to an upper pound boundary of the weighing area; determining a relative position between a midpoint of the first line segment and the upper pound boundary of the weighing area; determining that the vehicle has been weighed completely if the midpoint of the first line segment is above the upper pound boundary of the weighing area; and determining that the vehicle has not been weighed completely if the midpoint of the first line segment is below the upper pound boundary of the weighing area.
In some optional embodiments, one way of analyzing the weighing behavior of the vehicle based on the detection frame of the vehicle and the boundary of the weighing area comprises: determining a second coordinate at the lower right corner of the vehicle head according to a second detection frame marked on the second image; determining a second line segment intersecting the second detection frame through the second coordinate, the second line segment being parallel to a lower pound boundary of the weighing area; determining a relative position between a midpoint of the second line segment and the lower pound boundary of the weighing area; determining that the vehicle has not left the scale if the midpoint of the second line segment is above the lower pound boundary of the weighing area; and determining that the vehicle has left the scale if the midpoint of the second line segment is below the lower pound boundary of the weighing area.
Further, if the vehicle is determined to have left the scale and the barrier device in the weighing area has not opened the passage, it is determined that the vehicle has maliciously run the barrier.
In some optional embodiments, one way of analyzing the weighing behavior of the vehicle based on the detection frame of the vehicle and the boundary of the weighing area comprises: determining the number of vehicles in the boundary of the weighing area according to a first detection frame marked on the first image and/or a second detection frame marked on the second image; and determining that the vehicle has a vehicle following and weighing behavior if a plurality of vehicles exist in the boundary of the weighing area.
In some optional embodiments, the neural network model further comprises: a vehicle classification network; accordingly, one way of identifying the weighing behavior of the vehicle from the image data includes: inputting the image features into the vehicle classification network to identify a type of the vehicle based on the image features; determining the rated load of the vehicle according to the type of the vehicle; if the weight data is larger than the rated load, determining that the vehicle has overload behavior; and if the weight data of the vehicle is less than or equal to the rated load of the vehicle, determining that the vehicle does not have overload behavior.
In some optional embodiments, a manner of managing a weighing process of the vehicle in the weighing area based on the weighing behavior of the vehicle includes: and if the weighing behavior of the vehicle meets the set weighing requirement, sending an opening instruction to the barrier gate equipment in the weighing area to open a weighing channel.
In some optional embodiments, a manner of managing the weighing process of the vehicle in the weighing area based on the weighing behavior of the vehicle includes: if the weighing behavior of the vehicle does not meet the set weighing requirement, sending an alarm instruction to an alarm device in the weighing area so that the alarm device issues an early warning prompt signal.
In this embodiment, based on the image data of the vehicle while it is being weighed, the identification of the vehicle can be recognized and its weighing behavior analyzed. Based on the recognized identification, the weight data of the vehicle can be recorded automatically, reducing dependence on manual entry of weighing data. Meanwhile, by analyzing the weighing behaviors, abnormal cheating during weighing can be detected in time, so that the weighing process is managed intelligently and the reliability of the self-service weighing mode is improved.
The weighing detection method described in the above embodiments can be extended to a data processing method, which can be applied to a variety of different scenarios to implement automatic processing of task flows in a variety of different scenarios. The data processing method comprises the following steps:
and S1, acquiring image data and execution results of the target task executed by the mobile object in the designated area.
And S2, extracting the features of the image data, and identifying the position and the identification of the moving object according to the extracted image features.
And S3, identifying the behavior characteristics of the mobile object when executing the target task according to the position of the boundary of the designated area and the position of the mobile object.
And S4, correspondingly recording the execution result and the identification of the mobile object, and managing the processing flow associated with the target task according to the behavior characteristics.
The moving object may be a vehicle, including a motor vehicle and a non-motor vehicle, and may further include a robot that is movable by itself, such as a cleaning robot, a freight robot, a delivery robot, and the like. The designated area includes an area defined to facilitate the mobile object to execute a target task, which may include a certain job task or a specific operation.
When the implementation forms of the moving objects are different or the types of the target tasks are different, the implementation forms of the designated areas are also different. For example, for a vehicle, the designated area may be a parking area defined by a parking lot, where a user may drive the vehicle to perform a parking operation; alternatively, the weighing system can be a weighing area defined by the weighing monitoring station, and a user can drive the vehicle to perform weighing operation of the vehicle in the weighing area. For example, for a freight robot, the designated area may be a designated goods placement area, and the freight robot may transport goods to the area and perform a placement operation of the goods; or, it may be a designated automatic goods collecting area where the freight robot can perform an operation for collecting goods. For another example, for a cleaning robot, the designated area may be a designated cleaning area where the cleaning robot can perform a designated cleaning task. The above-mentioned specific areas and the implementation forms of the target tasks are only used for exemplary illustration, and the embodiments of the present application include but are not limited thereto.
The image acquisition device is installed near or above the designated area, and the designated area and the area near the designated area are located in the shooting field range of the image acquisition device, so that when the moving object executes a target task in the designated area, the image acquisition device can shoot the moving object to acquire image data. The image data may be a video stream obtained by continuous shooting or an image sequence obtained by interval shooting, which is not limited in this embodiment.
The execution result can be detected by a sensor deployed in a specified area, and the type of the sensor corresponds to the type of the target task. For example, the result of execution of a parking operation of the vehicle in the parking area may be detected by a radar sensor installed in the parking area. For example, the result of performing a weighing operation of the vehicle in the weighing area may be detected by a weight sensor installed in the weighing area. For example, the result of the cargo collecting operation of the cargo robot in the automatic cargo collecting area can be detected by a weight sensor or a visual sensor mounted on the cargo robot, and is not described in detail.
After the image data is acquired, the image data can be input into a neural network model, feature extraction is carried out on the image data in the neural network model, and the position and the identification of the moving object are identified according to the extracted image features. The neural network model may be a convolutional neural network model (CNN) or other artificial neural network models that can be used for target detection and classification, which is not limited in this embodiment.
The position of the boundary of the designated area can be obtained by pre-calibration. Because the image acquisition device is fixedly arranged near the designated area or above the designated area, the position of the boundary of the designated area is unchanged in the image shot by the image acquisition equipment. Based on the method, the boundary of the designated area can be calibrated in the image shot by the image acquisition equipment, so that the position of the boundary of the designated area can be rapidly and accurately obtained.
Identifying the behavior characteristics of the mobile object when executing the target task according to the position of the boundary of the designated area and the position of the mobile object may include: identifying whether the moving object is within the boundary of the designated area according to the position of the boundary of the designated area and the position of the moving object; identifying whether the moving object does not completely enter the designated area or not according to the position of the boundary of the designated area and the position of the moving object; and identifying whether the moving object moves out of the designated area according to the position of the boundary of the designated area and the position of the moving object. Based on the above, it can be determined whether or not the behavior of the mobile object at the time of executing the specified task is compliant.
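The "within the boundary" test above reduces to a point-in-polygon check of the moving object's position against the pre-calibrated vertices. A minimal ray-casting sketch (the polygon is assumed to be simple and given in image pixel coordinates; names are illustrative):

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: does pt = (x, y) lie inside the simple polygon
    given as an ordered list of (x, y) vertices?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal ray through y?
        if (y0 > y) != (y1 > y):
            # x coordinate where the edge crosses that ray
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

def object_within_area(box, polygon):
    """Use the bottom-edge midpoint of the detection frame as the
    object's ground-contact point, as in the weighing embodiments."""
    x0, _, x1, y1 = box
    return point_in_polygon(((x0 + x1) / 2.0, y1), polygon)
```

The same routine covers the parking, weighing, and goods-collecting examples that follow: only the calibrated polygon changes per designated area.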
It should be understood that when the behavior of the mobile object is in compliance, the subsequent processing flow under the compliance condition can be executed; when the behavior of the mobile object is not compliant, the subsequent processing flow under the non-compliant condition can be executed. The specific implementation form of the subsequent processing flow is associated with the target task, and this embodiment is not limited. Alternative embodiments of identifying behavior characteristics and performing subsequent processing flows based on the behavior characteristics are illustrated with reference to different examples below.
For example, when the vehicle performs a parking operation in a designated area, the position of the vehicle may be identified based on the image data, and whether the parking behavior of the vehicle is normative may be determined based on the position of the boundary of the parking area. And if the position of the vehicle is within the boundary of the parking area, the parking behavior of the vehicle is considered to meet the requirements of the parking regulations. At this time, a parking space marking process may be performed to mark that the parking space is used. And if the tail of the vehicle does not completely drive into the parking area or the head of the vehicle exceeds the parking area, the parking behavior of the vehicle is considered not to meet the requirements of parking specifications. At this time, a voice prompt process or an alarm process may be performed to prompt the driver to adjust the vehicle position.
For example, when the vehicle performs a weighing operation in the weighing area, the position of the vehicle can be identified from the image data, and whether the weighing behavior is compliant can be judged from the position of the boundary of the weighing area. If the position of the vehicle is within the boundary of the weighing area, the weighing behavior is deemed to meet the weighing specification; the scale-exit process can then be executed and the barrier gate opened to let the vehicle pass. If the tail of the vehicle has not fully driven into the weighing area, the vehicle is considered not to have been weighed completely, which constitutes weighing cheating; if the vehicle head extends beyond the weighing area, cheating such as leaving the scale early or maliciously running the barrier is considered to have occurred. When cheating is identified, a voice prompt process or an alarm process can be executed.
For example, when the freight robot collects goods in the automatic goods-collecting area, the position of the robot can be identified from the image data, and whether its collecting behavior is compliant can be judged from the position of the boundary of the area. If the freight robot is within the boundary of the automatic goods-collecting area, its collecting behavior is deemed to meet the collection specification; a return command may then be sent so that the robot delivers the collected goods to a designated location. If the freight robot has not completely entered, or has moved out of, the automatic goods-collecting area, its collecting behavior is deemed not to meet the collection specification, and a position adjustment instruction may be sent to prompt the robot to adjust its position.
Based on this embodiment, the process of a mobile object executing a target task is monitored automatically, and abnormal or cheating behavior during execution of the target task can be discovered in time, so that the processing flow associated with the target task is managed intelligently.
The weighing detection method described in the above embodiments can be extended to a vehicle inspection method that automatically identifies a vehicle undergoing inspection and supervises the vehicle inspection process. The vehicle inspection method may include:
S11, acquiring image data and an inspection result when the vehicle is inspected in a first inspection area.
S12, identifying the identification of the vehicle and the inspection behavior of the vehicle according to the image data.
S13, correspondingly recording the identification of the vehicle and the inspection result, and managing the inspection process of the vehicle according to the inspection behavior of the vehicle.
Typically, a vehicle inspection site contains multiple inspection areas, and different inspection areas may be used to inspect different items. For example, the inspection site may include a vehicle registration area, a vehicle appearance detection area, and an on-line detection area. The on-line detection area may in turn comprise a brake detection area, a light detection area, a chassis detection area, a hand-brake detection area, and the like.
The first inspection area in this embodiment may be implemented as any one of these areas; it is termed "first" herein merely for convenience of description and distinction, and does not indicate any positional order of the inspection areas.
In this embodiment, the inspection result may be entered by a user, for example as manually entered text, as voice information, or as image information; this embodiment is not limited in this respect. If the inspection result entered by the user is voice information, the voice can be recognized as text. If the inspection result is image information, object recognition can be performed on the image, for example to identify problems such as deformation, dents, or breakage of the vehicle body shown in the image.
The image data can be captured by a camera, which may be mounted near or above the inspection area. For an optional implementation of identifying the identification of the vehicle from the image data, reference may be made to the descriptions of the foregoing embodiments; details are not repeated here.
When the identification of the vehicle and the inspection result are recorded correspondingly, the inspection item corresponding to the first inspection area can first be determined from the correspondence between inspection areas and inspection items, and taken as the inspection item of the vehicle in the current inspection step; the inspection result and the inspection item of the vehicle are then recorded correspondingly to generate an inspection record for the current inspection step.
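The area-to-item lookup and record generation just described can be sketched as follows. This is a hypothetical illustration: the area names, item names, and record fields are assumptions for the example, not the patent's actual data model.

```python
# Illustrative mapping from inspection areas to inspection items.
AREA_TO_ITEM = {
    "registration_area": "vehicle_registration",
    "appearance_area": "appearance_inspection",
    "brake_area": "brake_inspection",
}

def make_inspection_record(vehicle_id, area, result):
    """Look up the inspection item for the area and build one record
    for the current inspection step."""
    item = AREA_TO_ITEM.get(area)
    if item is None:
        raise ValueError(f"unknown inspection area: {area}")
    return {"vehicle": vehicle_id, "item": item, "result": result}

record = make_inspection_record("AB1234", "brake_area", "pass")
```

The record associates the vehicle's identification with both the result and the item inferred from the area, so no manual entry of the item is needed.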
The operation of identifying the inspection behavior of the vehicle from the image data may be implemented with a neural network model. The model can be trained in advance on image data of vehicles performing the different inspection items in each inspection step. Through iterative training on a large number of sample images, the neural network model learns to classify inspection items. On this basis, in this embodiment, the image data can be input into the neural network model, features can be extracted from the image data within the model to obtain image features, and the actual inspection item of the vehicle can be identified from the image features.
It should be understood that the inspection item corresponding to the first inspection area is known. Therefore, if the actual inspection item of the vehicle identified from the image data differs from the inspection item corresponding to the first inspection area, it can be determined that cheating exists in the current inspection step. For example, the vehicle appearance detection area corresponds to the following items: a light inspection item, a vehicle-body inspection item, a tire inspection item, and a sunroof inspection item. If the actual inspection items identified for the vehicle in the appearance detection area do not match these items, or some of the items are omitted, it can be determined that cheating exists in the current appearance inspection step.
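The comparison between expected and actually performed items reduces to a set difference; any expected item not observed marks the step as cheating. A minimal sketch (item names are illustrative assumptions):

```python
def detect_inspection_cheating(expected_items, actual_items):
    """Return the expected items that were skipped; a non-empty result
    means the current inspection step is treated as cheating."""
    return set(expected_items) - set(actual_items)

expected = {"light", "body", "tire", "sunroof"}
actual = {"light", "body"}        # tire and sunroof checks were skipped
missing = detect_inspection_cheating(expected, actual)
cheating = bool(missing)
```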
When the vehicle inspection process is managed according to the vehicle's inspection behavior, if cheating is detected, a prompt can indicate that the current inspection step is unqualified and must be executed again; alternatively, an alert message can be sent to a designated administrator so that the inspection process is supervised manually. If no cheating is detected, the vehicle can be prompted to enter the next inspection step, or the inspection process can be ended.
In this embodiment, during the vehicle inspection process, the identification of the vehicle can be recognized and its inspection behavior analyzed from the image data, and the inspection result can be recorded automatically against the recognized identification, reducing dependence on manual operation. Meanwhile, by analyzing the inspection behavior of the vehicle, abnormal or cheating behavior can be discovered in time, so that the inspection process of the vehicle is supervised intelligently.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 501 to 503 may be device a; for another example, the execution subjects of steps 501 and 502 may be device a, and the execution subject of step 503 may be device B; and so on.
The weighing detection method provided by the embodiments of the present application is applicable to various scenarios, such as weighing cement trucks, coal trucks, chemical tankers, sand trucks, or crop transport vehicles, and automatic weighing at highway toll stations. Based on the embodiments of the present application, unattended intelligent weighing can be realized in these scenarios, reducing labor cost and improving weighing efficiency.
This is exemplified below with reference to fig. 5b. In fig. 5b, when an engineering truck passes over the weighbridge, an image acquisition operation can be performed to acquire image data of the truck on the weighbridge. The image data can then be analyzed by the server device. During analysis, the position of the truck can be detected, and vehicle weighing behavior analysis can be performed based on the detected vehicle position and a pre-calibrated weighbridge boundary. The analysis results may include: whether exactly one vehicle is on the weighbridge; and whether cheating behaviors occur, such as incomplete weighing, repeated weighing, tailgating onto the scale, or maliciously running the barrier gate. If at least one cheating problem exists, the server can raise an alarm in time through other terminal equipment to constrain the weighing behavior. Meanwhile, as shown in fig. 5b, the server can perform license plate localization on the image data to detect the position of the truck's license plate, and perform license plate recognition on the detected position. Based on the recognized license plate number, the server can record the weighing-related data of the truck; the details are not repeated one by one.
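The end-to-end flow above — detect the vehicle, analyze the weighing behavior against the calibrated boundary, recognize the plate, and record the weight — can be sketched roughly as follows. All component functions are stubs standing in for the models described in the text; the names, signatures, and data formats are assumptions made for illustration:

```python
def recognize_plate(image):
    return image["plate"]            # stub for the plate-recognition model

def detect_vehicles(image):
    return image["boxes"]            # stub for the vehicle-detection model

def analyze_weighing(boxes, boundary):
    # Behavior is normal only if exactly one vehicle sits fully between
    # the two calibrated boundary lines (boxes given as (y_top, y_bottom)).
    entry_y, exit_y = boundary
    return len(boxes) == 1 and all(
        entry_y <= top and bottom <= exit_y for (top, bottom) in boxes)

def process_weighing_event(image, weight, records, boundary):
    plate = recognize_plate(image)
    boxes = detect_vehicles(image)
    ok = analyze_weighing(boxes, boundary)
    if ok:                           # record weight only for normal behavior
        records.append({"plate": plate, "weight": weight})
    return ok

records = []
frame = {"plate": "ZJ-A12345", "boxes": [(120, 480)]}
ok = process_weighing_event(frame, 24850, records, boundary=(100, 500))
```

In an abnormal case the orchestrator would instead trigger the alarm path rather than appending a record.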
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 501, 502, etc., are merely used for distinguishing different operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application, and as shown in fig. 6, the electronic device includes: a memory 601 and a processor 602.
The memory 601 is used for storing computer programs and may be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 601 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 602, coupled to the memory 601, for executing the computer programs in the memory 601 to: acquiring image data and weight data of a vehicle when the vehicle is subjected to weighing in a weighing area; identifying an identity of the vehicle and a weighing behavior of the vehicle from the image data; correspondingly recording the identification of the vehicle and the weight data, and managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle.
Further optionally, the image data includes: a first image captured at the scale-entry position of the weighing area and a second image captured at the scale-exit position of the weighing area; the first image contains the running state of the tail of the vehicle, and the second image contains the running state of the head of the vehicle.
Further optionally, when the identifier of the vehicle is identified according to the image data, the processor 602 is specifically configured to: inputting the image data into a neural network model; the neural network model comprises a feature extraction network, a license plate detection network and a license plate number identification network; in the feature extraction network, feature extraction is carried out on the image data to obtain image features; inputting the image characteristics into the license plate detection network so as to determine the position of a license plate from the image data according to the image characteristics; and according to the license plate position, determining local image characteristics corresponding to the license plate from the image characteristics, and inputting the local image characteristics into the license plate number identification network to identify the license plate number contained in the image data.
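A structural sketch of this three-stage pipeline (feature extraction, plate detection, plate-number recognition) is shown below. The "networks" are plain Python functions standing in for the trained models, and the feature/crop dictionary format is purely an illustrative assumption:

```python
def extract_features(image):
    # Stand-in for the feature-extraction backbone; here the "feature
    # map" is simply the input structure itself.
    return image

def detect_plate(features):
    # Stand-in for the plate-detection head: returns (row, col, h, w).
    return features["plate_box"]

def recognize_plate_number(local_features):
    # Stand-in for the plate-number-recognition head.
    return local_features["text"]

def read_plate(image):
    """Chain the three stages: features -> plate box -> local crop -> text."""
    features = extract_features(image)
    box = detect_plate(features)
    local = features["crops"][box]   # local features at the plate position
    return recognize_plate_number(local)

img = {"plate_box": (10, 20, 30, 90),
       "crops": {(10, 20, 30, 90): {"text": "ZJ-B67890"}}}
plate = read_plate(img)
```

The key design point mirrored here is that the recognition head reuses the backbone's features cropped at the detected position, rather than re-extracting features from the raw image.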
Further optionally, when identifying the weighing behavior of the vehicle according to the image data, the processor 602 is specifically configured to: if the image data contains a plurality of different license plate numbers, determine that a tailgating weighing behavior exists in the weighing area; and if the license plate number of the vehicle has a plurality of weighing records associated with the weighing area within a set time range, determine that the vehicle has a repeated weighing behavior.
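These two rule checks are simple to express: several distinct plates in one frame indicate tailgating, and several records for one plate within a time window indicate repeated weighing. A hedged sketch (the record format and window semantics are assumptions):

```python
def has_tailgating(plate_numbers):
    """More than one distinct plate visible in the image data."""
    return len(set(plate_numbers)) > 1

def has_repeated_weighing(records, plate, window_s, now_s):
    """More than one weighing record for this plate within window_s."""
    recent = [r for r in records
              if r["plate"] == plate and now_s - r["time"] <= window_s]
    return len(recent) > 1

records = [{"plate": "A111", "time": 100},
           {"plate": "A111", "time": 160},
           {"plate": "B222", "time": 150}]
tailgate = has_tailgating(["A111", "B222"])
repeated = has_repeated_weighing(records, "A111", window_s=120, now_s=200)
```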
Further optionally, the neural network model further includes: a vehicle detection network; accordingly, when identifying the weighing behavior of the vehicle based on the image data, the processor 602 is specifically configured to: inputting the image features into the vehicle detection network to detect the position of the vehicle according to the image features, and labeling a detection frame of the vehicle in the image data according to the position of the vehicle; and analyzing the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area.
Further optionally, the processor 602 is further configured to: shooting the weighing area to obtain a third image; displaying the third image for a user to calibrate the weighing area on the third image; and determining the boundary of the weighing area according to the calibration operation of the user on the third image.
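A minimal sketch of turning the user's calibration operation on the third image into boundary coordinates, under the assumption that each boundary is approximately a horizontal line in the image (so that it reduces to a single row coordinate):

```python
def boundary_from_clicks(points):
    """Average the row (y) coordinates of the points the user clicked
    along one boundary line of the weighing area."""
    ys = [y for (_, y) in points]
    return sum(ys) / len(ys)

entry_y = boundary_from_clicks([(50, 98), (600, 102)])   # scale-entry edge
exit_y = boundary_from_clicks([(50, 500), (600, 500)])   # scale-exit edge
```

For a tilted camera, a fitted line per boundary would replace the single averaged coordinate.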
Further optionally, when analyzing the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area, the processor 602 is specifically configured to: determine, according to the first detection frame labeled on the first image, a first coordinate at the bottom-left corner of the vehicle tail; determine, through the first coordinate, a first line segment intersecting the first detection frame, the first line segment being parallel to the scale-entry boundary of the weighing area; determine the relative position between the midpoint of the first line segment and the scale-entry boundary of the weighing area; determine that the vehicle is fully on the scale if the midpoint of the first line segment is above the scale-entry boundary; and determine that the vehicle is not fully weighed if the midpoint of the first line segment is below the scale-entry boundary.
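The entry-side check reduces to a one-coordinate comparison, since the midpoint of a segment parallel to the boundary shares the segment's row coordinate. A hedged sketch; the box format and the coordinate convention (image y grows in the driving direction, so "past the boundary" means a larger y) are assumptions:

```python
def fully_on_scale(tail_box, entry_boundary_y):
    """tail_box = (x_left, y_top, x_right, y_bottom) of the tail
    detection frame. The segment through the bottom-left corner,
    parallel to the boundary, lies at y_bottom, so its midpoint does
    too; the vehicle is fully on the scale once that midpoint has
    crossed the scale-entry boundary."""
    _, _, _, y_bottom = tail_box
    return y_bottom > entry_boundary_y

on_scale = fully_on_scale((100, 50, 300, 220), entry_boundary_y=200)
```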
Further optionally, when analyzing the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area, the processor 602 is specifically configured to: determine, according to the second detection frame labeled on the second image, a second coordinate at the bottom-right corner of the vehicle head; determine, through the second coordinate, a second line segment intersecting the second detection frame, the second line segment being parallel to the scale-exit boundary of the weighing area; determine the relative position between the midpoint of the second line segment and the scale-exit boundary of the weighing area; determine that the vehicle has not yet driven off the scale if the midpoint of the second line segment is above the scale-exit boundary; and determine that the vehicle has driven off the scale if the midpoint of the second line segment is below the scale-exit boundary.
Further, if the processor 602 determines that the vehicle has driven off the scale while the barrier gate equipment of the weighing area has not opened the passage, it is determined that the vehicle has maliciously run the barrier gate.
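The exit-side geometry and the gate check combine into a small rule: crossing the scale-exit boundary while the gate is closed is flagged as a gate-running attempt. A hedged sketch; the box format and coordinate direction (y grows in the driving direction) are assumptions:

```python
def has_left_scale(head_box, exit_boundary_y):
    """head_box = (x_left, y_top, x_right, y_bottom) of the head
    detection frame; the segment through the bottom-right corner lies
    at y_bottom, so only that coordinate matters."""
    _, _, _, y_bottom = head_box
    return y_bottom > exit_boundary_y

def gate_violation(head_box, exit_boundary_y, gate_open):
    """Driving off the scale while the barrier gate is still closed."""
    return has_left_scale(head_box, exit_boundary_y) and not gate_open

violation = gate_violation((0, 0, 200, 520), exit_boundary_y=500,
                           gate_open=False)
```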
Further optionally, when analyzing the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area, the processor 602 is specifically configured to: determine the number of vehicles within the boundary of the weighing area according to the first detection frame labeled on the first image and/or the second detection frame labeled on the second image; and if a plurality of vehicles exist within the boundary of the weighing area, determine that a tailgating weighing behavior exists.
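Counting the detection frames whose centers fall between the two calibrated boundaries gives the number of vehicles on the scale; more than one means a second vehicle followed the first onto the weighbridge. A sketch with an assumed box format:

```python
def vehicles_on_scale(boxes, entry_y, exit_y):
    """Count detection frames (x_left, y_top, x_right, y_bottom) whose
    vertical center lies between the two scale boundaries."""
    def center_y(box):
        _, y_top, _, y_bottom = box
        return (y_top + y_bottom) / 2
    return sum(1 for b in boxes if entry_y < center_y(b) < exit_y)

count = vehicles_on_scale([(0, 150, 100, 250), (0, 300, 100, 420)],
                          entry_y=100, exit_y=500)
tailgating = count > 1
```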
Further optionally, the neural network model further includes: a vehicle classification network; accordingly, when identifying the weighing behavior of the vehicle based on the image data, the processor 602 is specifically configured to: inputting the image features into the vehicle classification network to identify a type of the vehicle based on the image features; determining the rated load of the vehicle according to the type of the vehicle; if the weight data is larger than the rated load, determining that the vehicle has overload behavior; and if the weight data of the vehicle is less than or equal to the rated load of the vehicle, determining that the vehicle does not have overload behavior.
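The overload rule above is a lookup-and-compare: classify the vehicle type, look up its rated load, and compare with the measured weight. A sketch; the type names and rated-load values are illustrative assumptions, not regulatory figures:

```python
# Illustrative rated loads per vehicle type, in kilograms.
RATED_LOAD_KG = {"light_truck": 5_000, "heavy_truck": 31_000}

def is_overloaded(vehicle_type, weight_kg):
    """Overloaded iff measured weight exceeds the type's rated load."""
    rated = RATED_LOAD_KG[vehicle_type]
    return weight_kg > rated

over = is_overloaded("light_truck", 6_200)
```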
Further optionally, when managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle, the processor 602 is specifically configured to: and if the weighing behavior of the vehicle meets the set weighing requirement, sending an opening instruction to the barrier gate equipment in the weighing area to open a weighing channel.
Further optionally, when managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle, the processor 602 is specifically configured to: if the weighing behavior of the vehicle does not meet the set weighing requirement, send an alarm instruction to an alert device in the weighing area, so that the alert device issues an early-warning prompt signal.
Further, as shown in fig. 6, the electronic device further includes: communication component 603, display component 604, power component 605, audio component 606, and the like. Only some of the components are schematically shown in fig. 6, and the electronic device is not meant to include only the components shown in fig. 6.
Wherein the communication component 603 is configured to facilitate communication between the device in which the communication component is located and other devices in a wired or wireless manner. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display assembly 604 includes a screen, which may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply 605 provides power to various components of the device in which the power supply is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
In this embodiment, the weighing process of the vehicle is photographed, and the captured image data are sent to the server; based on the image data, the server can identify the identification of the vehicle and analyze its weighing behavior. Based on the identified identification, the server can record the weight data of the vehicle automatically, reducing dependence on manually entering weighing data. Meanwhile, by analyzing the weighing behavior of the vehicle, the server can discover abnormal cheating behavior in time, so that the weighing process is managed intelligently and the reliability of the self-service weighing mode is improved.
It is worth noting that the electronic device illustrated in fig. 6 may also execute the following data processing logic: the processor 602 acquires image data and execution results of the mobile object when executing the target task in the designated area through the communication component 603; extracting the features of the image data, and identifying the position and the identification of the moving object according to the extracted image features; according to the position of the boundary of the designated area and the position of the mobile object, identifying behavior characteristics of the mobile object when the mobile object executes the target task; and correspondingly recording the execution result and the identification of the mobile object, and managing the processing flow associated with the target task according to the behavior characteristics.
It is further worth noting that the electronic device illustrated in fig. 6 may also execute the following data processing logic: the processor 602 acquires, through the communication component 603, image data and an inspection result when the vehicle is inspected in a first inspection area; identifies the identification of the vehicle and the inspection behavior of the vehicle from the image data; and correspondingly records the identification of the vehicle and the inspection result, and manages the inspection process of the vehicle according to the inspection behavior of the vehicle.
Further optionally, when correspondingly recording the identification of the vehicle and the inspection result, the processor 602 is specifically configured to: determine the inspection item corresponding to the first inspection area according to the correspondence between inspection areas and inspection items, and take it as the inspection item of the vehicle in the current inspection step; and correspondingly record the inspection result and the inspection item of the vehicle to generate an inspection record of the current inspection step.
Further optionally, when identifying the inspection behavior of the vehicle according to the image data, the processor 602 is specifically configured to: input the image data into a neural network model; perform feature extraction on the image data in the neural network model to obtain image features; identify the actual inspection item of the vehicle according to the image features; and if the actual inspection item of the vehicle differs from the inspection item corresponding to the first inspection area, determine that cheating exists in the current inspection step.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the electronic device in the foregoing method embodiments when executed.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (22)

1. A self-service weighing system, comprising:
the system comprises a weighbridge scale, image acquisition equipment and a server side;
the weighbridge scale is arranged in the weighing area and used for detecting the weight of a weighed vehicle and sending detected weight data to the server;
the image acquisition equipment is arranged above the weighing area and used for shooting image data when the vehicle is weighing and sending the image data to the server;
the server is used for: identifying an identity of the vehicle and a weighing behavior of the vehicle from the image data; correspondingly recording the identification of the vehicle and the weight data, and managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle.
2. The system of claim 1, wherein the image capture device comprises:
a first image capture device, mounted above the scale-entry position of the weighing area, and used for: shooting the running state of the tail of the vehicle at the scale-entry position; and,
a second image capture device, mounted above the scale-exit position of the weighing area, and used for: shooting the running state of the head of the vehicle at the scale-exit position.
3. The system of claim 1 or 2, further comprising: an alert device;
the server is used for: upon determining that the weighing behavior of the vehicle does not meet the set weighing requirement, sending an alert instruction to the alert device;
the alert device to: and sending out an early warning prompt signal according to the alarm instruction so as to prompt the vehicle to correct the weighing behavior.
4. The system of claim 1 or 2, further comprising: barrier equipment;
the server is specifically configured to: when the weighing behavior of the vehicle is determined to meet the set weighing requirement, sending an opening instruction to the barrier gate equipment;
the barrier gate device is specifically configured to: and opening a weighing channel according to the opening instruction sent by the server.
5. A data processing method, comprising:
acquiring image data and an execution result of a mobile object in a designated area when executing a target task;
extracting the features of the image data, and identifying the position and the identification of the moving object according to the extracted image features;
identifying behavior characteristics of the mobile object when the mobile object executes the target task according to the position of the boundary of the designated area and the position of the mobile object;
and correspondingly recording the execution result and the identification of the mobile object, and managing the processing flow associated with the target task according to the behavior characteristics.
6. A method of detecting weighing, comprising:
acquiring image data and weight data of a vehicle when the vehicle is subjected to weighing in a weighing area;
identifying an identity of the vehicle and a weighing behavior of the vehicle from the image data;
correspondingly recording the identification of the vehicle and the weight data, and managing the weighing process of the vehicle in the weighing area according to the weighing behavior of the vehicle.
7. The method of claim 6, wherein the image data comprises:
a first image captured at the scale-entry position of the weighing area and a second image captured at the scale-exit position of the weighing area; wherein the first image includes the running state of the tail of the vehicle, and the second image includes the running state of the head of the vehicle.
8. The method of claim 7, wherein identifying the identity of the vehicle from the image data comprises:
inputting the image data into a neural network model, the neural network model comprising a feature extraction network, a license plate detection network and a license plate number recognition network;
performing feature extraction on the image data in the feature extraction network to obtain image features;
inputting the image features into the license plate detection network to determine the position of the license plate in the image data from the image features;
and determining, according to the license plate position, the local image features corresponding to the license plate from the image features, and inputting the local image features into the license plate number recognition network to recognize the license plate number contained in the image data.
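The three-stage pipeline of claim 8 can be sketched as follows. This is a minimal illustration with stand-in functions, not the patented implementation: a real system would use trained networks (a CNN backbone, a detector head, and a sequence-recognition head), and every function name and the hard-coded plate box here are assumptions for demonstration only.

```python
# Minimal sketch of claim 8's pipeline: feature extraction -> plate detection
# -> cropping local features -> plate number recognition. All stages are
# illustrative stubs standing in for trained neural networks.

def extract_features(image):
    # Stand-in "feature extraction network": returns the input as its own
    # feature map; a real backbone would produce a learned tensor.
    return image

def detect_plate(features):
    # Stand-in "license plate detection network": returns a plate bounding
    # box (x, y, w, h). Hard-coded here purely for illustration.
    return (2, 1, 3, 1)

def crop_local_features(features, box):
    # Select the local image features corresponding to the detected plate.
    x, y, w, h = box
    return [row[x:x + w] for row in features[y:y + h]]

def recognize_number(local_features):
    # Stand-in "license plate number recognition network": joins the cropped
    # cells; a real head would run sequence recognition (e.g. CTC decoding).
    return "".join("".join(row) for row in local_features)

def identify_vehicle(image):
    features = extract_features(image)
    box = detect_plate(features)
    local = crop_local_features(features, box)
    return recognize_number(local)

# Toy "image": a character grid with the plate text at the hard-coded box.
image = [
    list("........"),
    list("..ABC..."),
    list("........"),
]
print(identify_vehicle(image))  # prints "ABC"
```

The key design point mirrored from the claim is that recognition runs on the *local* features cropped at the detected plate position, rather than re-encoding the full image.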
9. The method of claim 8, wherein identifying the weighing behavior of the vehicle from the image data comprises:
if the image data contains a plurality of different license plate numbers, determining that a tailgating (following-vehicle) weighing behavior exists in the weighing area;
and if the license plate number of the vehicle has a plurality of weighing records associated with the weighing area within a set time range, determining that the vehicle has a repeated-weighing behavior.
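Both checks in claim 9 reduce to simple set and record queries. The sketch below is an illustrative assumption of how they might be expressed (record format, threshold, and function names are invented for demonstration):

```python
# Sketch of claim 9's two cheating checks: more than one distinct plate in a
# single weighing event suggests tailgating; multiple recent weighing records
# for one plate suggest repeated weighing.

def has_tailgating(plate_numbers):
    # More than one distinct plate visible during one weighing event.
    return len(set(plate_numbers)) > 1

def has_repeated_weighing(records, plate, window_start, window_end, threshold=2):
    # records: list of (plate_number, timestamp) entries for the weighing area.
    hits = [t for p, t in records if p == plate and window_start <= t <= window_end]
    return len(hits) >= threshold

print(has_tailgating(["AB123", "AB123"]))  # False: only one vehicle seen
print(has_tailgating(["AB123", "CD456"]))  # True: a second plate appeared
print(has_repeated_weighing(
    [("AB123", 10), ("AB123", 50), ("CD456", 20)], "AB123", 0, 60))  # True
```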
10. The method of claim 8, wherein the neural network model further comprises: a vehicle detection network;
and identifying the weighing behavior of the vehicle from the image data comprises:
inputting the image features into the vehicle detection network to detect the position of the vehicle from the image features, and marking a detection frame of the vehicle in the image data according to the position of the vehicle;
and analyzing the weighing behavior of the vehicle according to the detection frame of the vehicle and the boundary of the weighing area.
11. The method of claim 10, further comprising:
capturing an image of the weighing area to obtain a third image;
displaying the third image for a user to calibrate the weighing area on the third image;
and determining the boundary of the weighing area according to the user's calibration operation on the third image.
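Claim 11's calibration step can be sketched as follows. The reduction of the clicked points to a single horizontal boundary line (the mean y-coordinate) is an illustrative simplification, not the patent's method; a real deployment might fit a line or store a full polygon.

```python
# Sketch of claim 11: the user marks points along an edge of the weighing
# area on a displayed image, and a boundary line is derived from the clicks.

def calibrate_boundary(clicked_points):
    # clicked_points: (x, y) pixel positions the user marked along one edge.
    # Simplification: the boundary is the horizontal line through the mean y.
    ys = [y for _, y in clicked_points]
    return sum(ys) / len(ys)

entry_boundary_y = calibrate_boundary([(100, 398), (400, 402), (700, 400)])
print(entry_boundary_y)  # 400.0
```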
12. The method of claim 10, wherein analyzing the weighing behavior of the vehicle based on the detection frame of the vehicle and the boundary of the weighing area comprises:
determining a first coordinate at the lower-left corner of the vehicle tail according to the first detection frame marked on the first image;
determining, through the first coordinate, a first line segment intersecting the first detection frame, the first line segment being parallel to the scale-entry boundary of the weighing area;
determining the relative position between the midpoint of the first line segment and the scale-entry boundary of the weighing area;
if the midpoint of the first line segment is above the scale-entry boundary of the weighing area, determining that the vehicle is fully on the scale;
and if the midpoint of the first line segment is below the scale-entry boundary of the weighing area, determining that the vehicle is not fully on the scale.
13. The method of claim 10, wherein analyzing the weighing behavior of the vehicle based on the detection frame of the vehicle and the boundary of the weighing area comprises:
determining a second coordinate at the lower-right corner of the vehicle head according to the second detection frame marked on the second image;
determining, through the second coordinate, a second line segment intersecting the second detection frame, the second line segment being parallel to the scale-exit boundary of the weighing area;
determining the relative position between the midpoint of the second line segment and the scale-exit boundary of the weighing area;
if the midpoint of the second line segment is above the scale-exit boundary of the weighing area, determining that the vehicle has not left the scale;
and if the midpoint of the second line segment is below the scale-exit boundary of the weighing area, determining that the vehicle has left the scale.
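The geometric tests of claims 12 and 13 share one shape: take a corner of the vehicle's detection frame, draw a segment through it parallel to the relevant boundary, and compare the segment's midpoint with the boundary. The sketch below assumes image coordinates that grow downward (so "above" means a smaller y) and a horizontal boundary line; box formats and function names are illustrative assumptions.

```python
# Combined sketch of claims 12 and 13: midpoint-vs-boundary tests for
# "fully on the scale" (entry camera, vehicle tail) and "has left the
# scale" (exit camera, vehicle head).

def segment_midpoint(x1, x2, y):
    # Horizontal segment through a corner point, parallel to the boundary.
    return ((x1 + x2) / 2, y)

def fully_on_scale(tail_box, entry_boundary_y):
    # tail_box: (x_min, y_min, x_max, y_max) of the tail detection frame;
    # the lower corner's y (y_max) carries the segment (claim 12).
    x_min, _, x_max, y_max = tail_box
    _, mid_y = segment_midpoint(x_min, x_max, y_max)
    return mid_y < entry_boundary_y  # midpoint above the entry boundary

def off_scale(head_box, exit_boundary_y):
    # head_box: detection frame of the vehicle head (claim 13); a midpoint
    # below the exit boundary means the vehicle has driven off the scale.
    x_min, _, x_max, y_max = head_box
    _, mid_y = segment_midpoint(x_min, x_max, y_max)
    return mid_y > exit_boundary_y

print(fully_on_scale((50, 100, 200, 380), entry_boundary_y=400))  # True
print(off_scale((50, 100, 200, 380), exit_boundary_y=300))        # True
```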
14. The method of claim 13, further comprising:
if it is determined that the vehicle has left the scale while the barrier gate device in the weighing area has not opened the passage, determining that the vehicle has maliciously run the barrier gate.
15. The method of claim 10, wherein analyzing the weighing behavior of the vehicle based on the detection frame of the vehicle and the boundary of the weighing area comprises:
determining the number of vehicles within the boundary of the weighing area according to the first detection frame marked on the first image and/or the second detection frame marked on the second image;
and if a plurality of vehicles exist within the boundary of the weighing area, determining that a tailgating weighing behavior exists.
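A straightforward reading of claim 15 is to count detection frames inside the calibrated area. The center-point containment test below is one common convention, used here as an illustrative assumption (the claim does not fix a containment rule):

```python
# Sketch of claim 15: count detection frames whose centers fall inside the
# weighing area; more than one vehicle inside indicates tailgating.

def boxes_inside(boxes, area):
    # boxes and area are (x_min, y_min, x_max, y_max) rectangles.
    ax1, ay1, ax2, ay2 = area
    inside = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # frame center
        if ax1 <= cx <= ax2 and ay1 <= cy <= ay2:
            inside.append((x1, y1, x2, y2))
    return inside

def tailgating(boxes, area):
    return len(boxes_inside(boxes, area)) > 1

area = (0, 0, 1000, 600)
print(tailgating([(100, 100, 300, 300)], area))                        # False
print(tailgating([(100, 100, 300, 300), (500, 200, 700, 400)], area))  # True
```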
16. The method of claim 8, wherein the neural network model further comprises: a vehicle classification network;
and identifying the weighing behavior of the vehicle from the image data comprises:
inputting the image features into the vehicle classification network to identify the type of the vehicle from the image features;
determining the rated load of the vehicle according to the type of the vehicle;
if the weight data is greater than the rated load, determining that the vehicle has an overload behavior;
and if the weight data of the vehicle is less than or equal to the rated load of the vehicle, determining that the vehicle has no overload behavior.
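The overload test of claim 16 is a table lookup plus a comparison. The type names and rated-load figures below are hypothetical placeholders, not values from the patent:

```python
# Sketch of claim 16: classify the vehicle type, look up its rated load,
# and flag overload when the measured weight exceeds it.

RATED_LOAD_KG = {            # hypothetical rated loads per vehicle type
    "light_truck": 5_000,
    "heavy_truck": 31_000,
}

def is_overloaded(vehicle_type, weight_kg):
    rated = RATED_LOAD_KG[vehicle_type]
    return weight_kg > rated  # weight <= rated load means no overload

print(is_overloaded("light_truck", 6_200))   # True
print(is_overloaded("heavy_truck", 30_000))  # False
```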
17. The method of any one of claims 6-16, wherein managing a weighing process of the vehicle in the weighing area based on the weighing behavior of the vehicle comprises:
if the weighing behavior of the vehicle meets the set weighing requirement, sending an opening instruction to the barrier gate device in the weighing area to open the weighing channel;
and if the weighing behavior of the vehicle does not meet the set weighing requirement, sending an alarm instruction to an alarm device in the weighing area so that the alarm device emits an early-warning signal.
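Claim 17's branch can be sketched with stub device interfaces. These callbacks are illustrative assumptions, not a real barrier-gate or alarm API:

```python
# Sketch of claim 17's management step: a compliant weighing behavior opens
# the barrier gate; otherwise the alarm device is triggered.

def manage_weighing(behavior_ok, send_to_gate, send_to_alarm):
    if behavior_ok:
        send_to_gate("OPEN")   # opening instruction: open the weighing channel
        return "gate_opened"
    send_to_alarm("ALERT")     # alarm instruction: emit an early-warning signal
    return "alarm_raised"

log = []
print(manage_weighing(True, log.append, log.append))   # gate_opened
print(manage_weighing(False, log.append, log.append))  # alarm_raised
print(log)  # ['OPEN', 'ALERT']
```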
18. A vehicle inspection method, comprising:
acquiring image data and an inspection result of a vehicle undergoing inspection in a first inspection area;
identifying an identity of the vehicle and an inspection behavior of the vehicle from the image data;
and recording the identity of the vehicle in association with the inspection result, and managing the inspection process of the vehicle according to the inspection behavior of the vehicle.
19. The method of claim 18, wherein recording the identity of the vehicle in association with the inspection result comprises:
determining the inspection item corresponding to the first inspection area according to the correspondence between inspection areas and inspection items, and taking it as the inspection item of the vehicle in the current inspection step;
and recording the inspection result in association with the inspection item of the vehicle to generate an inspection record of the current inspection step.
20. The method of claim 18 or 19, wherein identifying the inspection behavior of the vehicle from the image data comprises:
inputting the image data into a neural network model;
performing feature extraction on the image data in the neural network model to obtain image features;
identifying the actual inspection item of the vehicle according to the image features;
and if the actual inspection item of the vehicle differs from the inspection item corresponding to the first inspection area, determining that cheating exists in the current inspection step.
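The cheating check of claim 20 reduces to comparing the recognized item against the area's assigned item. The area-to-item mapping and names below are hypothetical:

```python
# Sketch of claim 20: compare the inspection item recognized from the image
# against the item the first inspection area is assigned to perform; a
# mismatch flags cheating in the current inspection step.

AREA_TO_ITEM = {"area_1": "brake_test"}  # hypothetical area-to-item mapping

def cheating_detected(area_id, recognized_item):
    expected = AREA_TO_ITEM[area_id]
    return recognized_item != expected

print(cheating_detected("area_1", "brake_test"))  # False: items match
print(cheating_detected("area_1", "light_test"))  # True: mismatch flagged
```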
21. An electronic device, comprising: a memory and a processor;
the memory is configured to store one or more computer instructions;
the processor is configured to execute the one or more computer instructions to perform the steps of the method of any one of claims 5-20.
22. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the method of any one of claims 5 to 20.
CN202010626670.XA 2020-07-01 2020-07-01 Self-service weighing system, weighing detection method, weighing detection equipment and storage medium Active CN113515985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010626670.XA CN113515985B (en) 2020-07-01 2020-07-01 Self-service weighing system, weighing detection method, weighing detection equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113515985A true CN113515985A (en) 2021-10-19
CN113515985B CN113515985B (en) 2022-07-22

Family

ID=78060714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010626670.XA Active CN113515985B (en) 2020-07-01 2020-07-01 Self-service weighing system, weighing detection method, weighing detection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113515985B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102809413A (en) * 2012-08-06 2012-12-05 北京万集科技股份有限公司 Detecting method and detecting device for rigged dynamic weighing of vehicle
CN203163853U (en) * 2012-12-31 2013-08-28 日照港集团岚山港务有限公司 Automatic weighing management system
CN105241533A (en) * 2015-09-30 2016-01-13 湖北叶威(集团)智能科技有限公司 Automatic weighing metering system and method for granary
CN205812259U (en) * 2016-07-26 2016-12-14 岳阳市俊昇科贸有限公司 A kind of multi-faceted monitoring anti-cheating weighbridge system
CN111027555A (en) * 2018-10-09 2020-04-17 杭州海康威视数字技术股份有限公司 License plate recognition method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
申世武 (Shen Shiwu): "Design of the logistics one-card *** for the Liugang ore logistics park", 《金属世界》 (Metal World) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113984175A (en) * 2021-10-26 2022-01-28 东北大学秦皇岛分校 Vehicle-mounted recalibration method based on artificial neural network and cloud service system
CN114360258A (en) * 2022-01-08 2022-04-15 中通服建设有限公司 Application method of cloud technology in superordinate control system
CN114863693A (en) * 2022-05-07 2022-08-05 无锡职业技术学院 Special vehicle monitoring method for expressway based on big data
CN114924477A (en) * 2022-05-26 2022-08-19 西南大学 Electric fish blocking and ship passing device based on image recognition and PID intelligent control
CN116126961A (en) * 2023-04-04 2023-05-16 河北中废通网络技术有限公司 Tamper-proof unattended weighing data system of regeneration circulation internet of things information system
CN116126961B (en) * 2023-04-04 2023-07-04 河北中废通网络技术有限公司 Tamper-proof unattended weighing data system of regeneration circulation internet of things information system
CN117906728A (en) * 2024-03-20 2024-04-19 四川开物信息技术有限公司 Vehicle weighing system
CN117906728B (en) * 2024-03-20 2024-06-07 四川开物信息技术有限公司 Vehicle weighing system

Also Published As

Publication number Publication date
CN113515985B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN113515985B (en) Self-service weighing system, weighing detection method, weighing detection equipment and storage medium
CN105241533A (en) Automatic weighing metering system and method for granary
CN110046547A (en) Report method, system, computer equipment and storage medium violating the regulations
CN110232827B (en) Free flow toll collection vehicle type identification method, device and system
US11302191B2 (en) Method and apparatus for calculating parking occupancy
CN111368612B (en) Overguard detection system, personnel detection method and electronic equipment
CN113063480A (en) One yard leads to unmanned on duty prevents weighing system that practises fraud
CN109374097A (en) Garbage disposal site intelligent management system and method based on highway ETC
CN113112798B (en) Vehicle overload detection method, system and storage medium
CN110599081A (en) Method and system for intelligently managing grain warehouse-in and warehouse-out business
CN103630221A (en) Method for remotely monitoring vehicle weighing cheating by using video analyzing
CN111397712A (en) Freight vehicle wagon balance monitoring method for preventing weighing cheat
CN110995771A (en) Freight train land transportation monitoring management system based on thing networking
CN111412969A (en) Intelligent management method and system for truck scale
CN114067295A (en) Method and device for determining vehicle loading rate and vehicle management system
CN108805184B (en) Image recognition method and system for fixed space and vehicle
CN111950368A (en) Freight vehicle monitoring method, device, electronic equipment and medium
CN115600953A (en) Monitoring method and device for warehouse positions, computer equipment and storage medium
US20230114688A1 (en) Edge computing device and system for vehicle, container, railcar, trailer, and driver verification
US9625402B2 (en) Method and apparatus for detection of defective brakes
CN110782200A (en) Intelligent management system and method for logistics vehicles
CN116772987A (en) Method, device, system and storage medium for detecting wagon balance
KR20150039925A (en) Smart weighing system
CN115830507A (en) Cargo management method and device
CN114399671A (en) Target identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant