CN114860359B - Image processing method, system and storage medium - Google Patents

Image processing method, system and storage medium Download PDF

Info

Publication number
CN114860359B
CN114860359B CN202210356175.0A
Authority
CN
China
Prior art keywords
information
current
target
area information
matching degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210356175.0A
Other languages
Chinese (zh)
Other versions
CN114860359A (en)
Inventor
肖喜中
徐海华
魏溪含
陈伟璇
赵朋飞
杨昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202210356175.0A priority Critical patent/CN114860359B/en
Publication of CN114860359A publication Critical patent/CN114860359A/en
Application granted granted Critical
Publication of CN114860359B publication Critical patent/CN114860359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/235Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264Parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method, an image processing system, and a storage medium. The method includes the following steps: displaying, on an interactive interface, current area information and target area information associated with a first object, where the current area information represents the area in which the first object is currently located and the target area information represents the target area in which the first object needs to park; obtaining a matching degree, displayed on the interactive interface, based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and, in response to the matching degree being greater than a target threshold, displaying first prompt information on the interactive interface, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object. The invention solves the technical problem of low object-detection efficiency.

Description

Image processing method, system and storage medium
Technical Field
The present invention relates to the field of computers, and in particular, to an image processing method, system, and storage medium.
Background
At present, when an object is to be assessed, a certain number of image acquisition devices are usually arranged in the system; the image acquisition devices image the load carried by a movable object, and the object is judged based on the quality of the analysis of those images. When the image acquisition devices are partially occluded, however, the judgment is inaccurate, so this approach still suffers from the technical problem of low object-detection efficiency.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing system and a storage medium, which are used for at least solving the technical problem of low efficiency of object detection.
According to one aspect of the embodiments of the present invention, an image processing method is provided. The method may include: displaying, on an interactive interface, current area information and target area information associated with a first object, where the current area information represents the area in which the first object is currently located and the target area information represents the target area in which the first object needs to park; obtaining a matching degree, displayed on the interactive interface, based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and, in response to the matching degree being greater than a target threshold, displaying first prompt information on the interactive interface, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object.
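The embodiments do not state how the matching degree is computed; intersection-over-union (IoU) is a common choice for comparing two rectangular regions, and the sketch below uses it as an illustrative stand-in. The `(x1, y1, x2, y2)` box format, the `first_prompt` helper, and the threshold of 0.8 are all hypothetical, not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def first_prompt(current_box, target_box, threshold=0.8):
    """Return True when the match is good enough to show the first prompt."""
    return iou(current_box, target_box) > threshold

# A vehicle frame almost aligned with its target slot:
print(first_prompt((0, 0, 10, 4), (1, 0, 11, 4)))
```

With these sample boxes the intersection is 36 and the union is 44, an IoU of roughly 0.82, which would exceed the assumed threshold and trigger the first prompt.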
According to another aspect of the embodiments of the present invention, another image processing method is provided. The method may include: acquiring current area information and target area information associated with a first object, where the current area information represents the area in which the first object is currently located and the target area information represents the target area in which the first object needs to park; determining a matching degree based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and, in response to the matching degree being greater than a target threshold, sending first prompt information to a client, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object.
According to another aspect of the embodiments of the present invention, an image detection method for a vehicle is also provided. The method may include: displaying, on an interactive interface, current area information and target area information associated with the vehicle, where the current area information indicates the area in which the vehicle is currently located and the target area information indicates the target area in which the vehicle needs to park; obtaining a matching degree, displayed on the interactive interface, based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and, in response to the matching degree being greater than a target threshold, displaying first prompt information on the interactive interface, where the first prompt information prompts that, when the vehicle is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect the industrial object carried by the vehicle.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus. The apparatus may include: a first display unit configured to display, on an interactive interface, current area information and target area information associated with a first object, where the current area information represents the area in which the first object is currently located and the target area information represents the target area in which the first object needs to park; a first acquisition unit configured to obtain a matching degree, displayed on the interactive interface, based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and a second display unit configured to display first prompt information on the interactive interface in response to the matching degree being greater than a target threshold, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object.
According to another aspect of the embodiments of the present invention, there is also provided another image processing apparatus. The apparatus may include: a second acquisition unit configured to acquire current area information and target area information associated with a first object, where the current area information indicates the area in which the first object is currently located and the target area information indicates the target area in which the first object needs to park; a determining unit configured to determine a matching degree based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and a sending unit configured to send first prompt information to a client in response to the matching degree being greater than a target threshold, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object.
According to another aspect of the embodiments of the present invention, there is also provided an image detection apparatus for a vehicle. The apparatus may include: a third display unit configured to display, on an interactive interface, current area information and target area information associated with the vehicle, where the current area information indicates the area in which the vehicle is currently located and the target area information indicates the target area in which the vehicle needs to park; a third acquisition unit configured to obtain a matching degree, displayed on the interactive interface, based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and a fourth display unit configured to display first prompt information on the interactive interface in response to the matching degree being greater than a target threshold, where the first prompt information prompts that, when the vehicle is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect the industrial object carried by the vehicle.
According to another aspect of the embodiments of the present invention, there is also provided an image processing system, including: a server configured to acquire current area information and target area information associated with a first object and to determine a matching degree based on the current area information and the target area information, where the current area information represents the area in which the first object is currently located and the target area information represents the target area in which the first object needs to park; and a client configured to display first prompt information in response to the matching degree being greater than a target threshold, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium including a stored program, where, when the program runs, it controls a device on which the storage medium is located to execute any one of the image processing methods above.
According to another aspect of the embodiments of the present invention, there is also provided a processor configured to run a program, where any one of the image processing methods above is performed when the program runs.
In the embodiments of the present application, current area information and target area information associated with a first object are displayed on an interactive interface, where the current area information represents the area in which the first object is currently located and the target area information represents the target area in which the first object needs to park; a matching degree, displayed on the interactive interface, is obtained based on the current area information and the target area information, where the matching degree represents how well the current area matches the target area; and, in response to the matching degree being greater than a target threshold, first prompt information is displayed on the interactive interface, where the first prompt information prompts that, when the first object is parked in the current area, a detection device deployed at a position associated with the target area is allowed to detect a second object carried by the first object. In other words, the current area of the first object is located in a real-time, interactive manner, and when the current area matches the target area well, the second object carried by the first object can be detected. A good stop position makes detection of the second object more stable and accurate, which achieves the technical effect of improving detection efficiency and thereby solves the technical problem of low object-detection efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a block diagram of a hardware structure of a computer terminal (or mobile device) of an image processing method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a computing environment for an image processing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of another image processing method according to an embodiment of the present invention;
Fig. 5 is a flowchart of an image detection method of a vehicle according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a scrap grade stop indication system in accordance with an embodiment of the present invention;
FIG. 7 is a flow chart of a scrap steel grade determining parking indication method in accordance with an embodiment of the present invention;
FIG. 8 is a flow chart of a vehicle positioning algorithm according to an embodiment of the invention;
FIG. 9 is a schematic illustration of an interactive interface display according to an embodiment of the invention;
FIG. 10 is a block diagram of a service grid of a method of object detection according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 12 is a schematic view of another image processing apparatus according to an embodiment of the present invention;
Fig. 13 is a schematic view of an image detection apparatus of a vehicle according to an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, and not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms and terminology appearing in the description of the embodiments of the present application are explained as follows:
A fully convolutional network (FCN) classifies an image at the pixel level, thereby solving image segmentation at the semantic level. It can accept an input image of any size: a deconvolution layer upsamples the feature map of the last convolution layer back to the size of the input image, so that a prediction can be generated for each pixel while the spatial information of the original input image is preserved, and pixel-by-pixel classification is performed on the upsampled feature map.
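As a toy illustration of the pixel-wise classification idea described above (not the network used in the embodiments), the sketch below upsamples small per-class score maps back to the input size and takes a per-pixel argmax. Nearest-neighbour repetition stands in for the learned deconvolution layer, and all sizes and scores are invented.

```python
def upsample_nearest(feature_map, factor):
    """Nearest-neighbour upsampling: each value is repeated factor x factor
    times. In a real FCN this role is played by a learned deconvolution."""
    out = []
    for row in feature_map:
        expanded = [v for v in row for _ in range(factor)]
        out.extend([expanded[:] for _ in range(factor)])
    return out

def pixelwise_argmax(score_maps):
    """Per-pixel class decision over a list of per-class score maps."""
    h, w = len(score_maps[0]), len(score_maps[0][0])
    return [[max(range(len(score_maps)),
                 key=lambda c: score_maps[c][y][x])
             for x in range(w)] for y in range(h)]

# 2x2 score maps for classes {0: background, 1: vehicle}, restored to 4x4:
bg = upsample_nearest([[0.9, 0.2], [0.8, 0.1]], 2)
fg = upsample_nearest([[0.1, 0.8], [0.2, 0.9]], 2)
labels = pixelwise_argmax([bg, fg])  # one class label per output pixel
```

The left half of `labels` comes out as background and the right half as vehicle, mirroring the spatial layout of the low-resolution score maps.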
A feedforward neural network (FNN) is a type of artificial neural network in which each neuron, starting from the input layer, receives the output of the previous layer and passes its own output to the next layer until the output layer is reached. There is no feedback anywhere in the network, so the whole network can be represented as a directed acyclic graph. The feedforward neural network was the earliest artificial neural network to be proposed and is also the simplest type; according to the number of layers, it can be divided into single-layer and multi-layer feedforward neural networks.
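A minimal forward pass through a single-hidden-layer feedforward network makes the no-feedback, layer-to-layer structure concrete; the layer sizes, weights, and ReLU activation below are arbitrary illustrations, not values from the embodiments.

```python
def relu(x):
    """Rectified linear activation."""
    return x if x > 0 else 0.0

def dense(inputs, weights, biases, activation=lambda v: v):
    """One fully connected layer: out[j] = act(dot(inputs, weights[j]) + biases[j])."""
    return [activation(sum(x * w for x, w in zip(inputs, wj)) + b)
            for wj, b in zip(weights, biases)]

def feedforward(x):
    """Input -> hidden (ReLU) -> output; data only ever flows forward."""
    hidden = dense(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0], relu)
    return dense(hidden, [[1.0, 1.0]], [0.0])

print(feedforward([2.0, 1.0]))
```

Stacking more `dense` calls gives a multi-layer feedforward network; dropping the hidden layer gives the single-layer case mentioned above.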
A mask is a container that defines a set of graphics and uses them as a semi-transparent medium; it can be used to combine foreground objects with a background. Like effects and transforms, a mask is attached to a layer and exists as a property of that layer.
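The semi-transparent compositing role of a mask can be sketched as a per-pixel blend of foreground and background; the list-of-lists image representation and the sample values below are purely illustrative.

```python
def composite(foreground, background, mask):
    """Blend per pixel: a mask value in [0, 1] weights the foreground."""
    return [[m * f + (1 - m) * b
             for f, b, m in zip(frow, brow, mrow)]
            for frow, brow, mrow in zip(foreground, background, mask)]

fg = [[1.0, 1.0], [1.0, 1.0]]    # white foreground
bg = [[0.0, 0.0], [0.0, 0.0]]    # black background
mask = [[1.0, 0.5], [0.0, 1.0]]  # opaque, half-transparent, hidden, opaque
out = composite(fg, bg, mask)
```

A mask value of 1.0 shows the foreground fully, 0.0 hides it, and intermediate values produce the semi-transparent blend.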
A parking indication system is an indication system that guides a vehicle smoothly into a target parking space. It may be an intelligent parking-guidance system that directs vehicles to empty spaces in a parking lot: a detector detects the parking spaces, a display screen shows information about the empty spaces, and the displayed information makes parking easy.
Example 1
In accordance with an embodiment of the present invention, a method embodiment of an image processing method is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one shown or described herein.
The method according to the first embodiment of the present application may be implemented in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 is a hardware block diagram of a computer terminal (or mobile device) for an image processing method according to an embodiment of the present application. As shown in Fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors (shown as 102a, 102b, …, 102n in the figure; the processors may include, but are not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or another processing device), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. Those of ordinary skill in the art will appreciate that the configuration shown in Fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1.
It should be noted that the one or more processors and/or other image processing circuits described above may be referred to generally herein as "image processing circuits". The image processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the image processing circuitry may be a single stand-alone processing module, or may be incorporated in whole or in part into any of the other elements of the computer terminal 10 (or mobile device). As referred to in the embodiments of the present application, the image processing circuitry acts as a kind of processor control (e.g., selection of a variable-resistance termination path connected to an interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the image processing methods in the embodiments of the present invention, and the processor executes the software programs and modules stored in the memory 104, thereby performing various functional applications and image processing, that is, implementing the image processing methods described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Fig. 1 shows a hardware block diagram that can serve not only as an exemplary block diagram of the computer terminal 10 (or mobile device) described above but also as an exemplary block diagram of the server described above. In an alternative embodiment, Fig. 2 is a block diagram of a computing environment for an image processing method according to an embodiment of the present invention, in which the computer terminal 10 (or mobile device) shown in Fig. 1 serves as a computing node in the computing environment 201. As shown in Fig. 2, the computing environment 201 includes a plurality of computing nodes (e.g., servers) running on a distributed network (shown as 210-1, 210-2, …). Each computing node contains local processing and memory resources, and an end user 202 may run applications or store data remotely in the computing environment 201. An application may be provided as a plurality of services 220-1, 220-2, 220-3, and 220-4 in the computing environment 201, representing services "A", "D", "E", and "H", respectively.
The end user 202 may provide and access the services through a web browser or other software application on a client. In some embodiments, the provisioning and/or requests of the end user 202 may be provided to an ingress gateway 230. The ingress gateway 230 may include a corresponding agent to handle provisioning and/or requests for the services 220 (one or more services provided in the computing environment 201).
The services 220 are provided or deployed according to the various virtualization techniques supported by the computing environment 201. In some embodiments, a service 220 may be provided according to virtual machine (VM) based virtualization, container-based virtualization, and/or the like. Virtual-machine-based virtualization emulates a real computer by initializing a virtual machine, executing programs and applications without directly touching any real hardware resources. Whereas a virtual machine virtualizes an entire machine, container-based virtualization virtualizes at the level of the operating system (OS): containers can be started so that multiple workloads run on a single operating-system instance.
In one embodiment based on container virtualization, several containers of a service 220 may be assembled into one POD (e.g., a Kubernetes POD). For example, as shown in Fig. 2, the service 220-2 may be equipped with one or more PODs 240-1, 240-2, …, 240-N (collectively, PODs 240). Each POD 240 may include an agent 245 and one or more containers 242-1, 242-2, …, 242-M (collectively, containers 242). One or more containers 242 in the POD 240 handle requests related to one or more corresponding functions of the service, while the agent 245 generally controls network functions related to the service, such as routing and load balancing. Other services 220 may likewise be accompanied by PODs similar to POD 240.
In operation, executing a user request from the end user 202 may require invoking one or more services 220 in the computing environment 201, and executing one or more functions of one service 220 may require invoking one or more functions of another service 220. As shown in Fig. 2, service "A" 220-1 receives the user request of the end user 202 from the ingress gateway 230; service "A" 220-1 may invoke service "D" 220-2, and service "D" 220-2 may request service "E" 220-3 to perform one or more functions.
The computing environment may be a cloud computing environment, in which the allocation of resources is managed by a cloud service provider, allowing functions to be developed without considering how to implement, adjust, or scale the servers. The computing environment allows developers to execute code that responds to events without building or maintaining a complex infrastructure. Instead of scaling up a single hardware device to handle the potential load, the service may be partitioned into a set of functions that can be scaled automatically and independently.
In the above-described operating environment, the present application provides an image processing method as shown in fig. 3. It should be noted that, the image processing method of this embodiment may be performed by the mobile terminal of the embodiment shown in fig. 1.
Fig. 3 is a flowchart of an image processing method according to a first embodiment of the present invention, and as shown in fig. 3, the method may include the steps of:
Step S302, displaying current area information and target area information associated with the first object on an interactive interface, where the current area information indicates the area in which the first object is currently located and the target area information indicates the target area in which the first object needs to park.
In the technical solution provided in the above step S302 of the present invention, the current area information associated with the first object is obtained, and the current area information and the target area information of the target area where the first object needs to park are displayed on the interactive interface. The interactive interface may be an interface of the mobile terminal, for example an operation interface of a portable terminal such as a mobile phone, a notebook computer, a tablet computer, or even a vehicle-mounted computer; any displayable interface may serve as the interactive interface, without specific limitation. The first object may be any object in the target area, and may be a movable object, for example an automobile, a scrap steel vehicle, or the like. The current area information may be position information of the current actual position of the first object; it may be displayed on the interactive interface in the form of a virtual frame, may be used to represent the area information of the area where the first object is currently located, and may include position information, direction information, and the like. The target area information may be preset parking position information; it may be displayed on the interactive interface in the form of a virtual frame, or drawn on the ground in the form of a real frame, to indicate the position where the first object needs to be parked.
Optionally, the first object may be scanned by the handheld terminal to obtain information of the first object, the scanned information is sent to the server, the server may process the obtained information to obtain target area information and current area information of the first object, and the target area information and the current area information are transmitted and displayed on the interactive interface.
Optionally, when the first object is a mobile vehicle and enters the target area, the first object may be scanned by the handheld terminal to determine information of the first object, such as its license plate, system number, and orientation. The determined information is sent to the server; after acquiring the information, the server may determine the current area information of the first object using a vehicle positioning algorithm, and at the same time image the first object to determine the target area information. The target area information and the current area information may be acquired by a control program on the server and transmitted by the controller to the interactive interface of the mobile terminal for display, where the transmission mode may include wireless communication, wired communication, and the like, and is not specifically limited herein.
Step S304, based on the current area information and the target area information, obtaining the matching degree displayed on the interactive interface, wherein the matching degree is used for representing the matching degree between the current area and the target area.
In the technical solution provided in the above step S304 of the present invention, the position of the first object is adjusted based on the target area information displayed on the interface, the current area information is determined based on the position of the first object, and the matching degree displayed on the interactive interface is obtained based on the current area information and the target area information. The matching degree may be used to represent the degree of matching between the current area and the target area and may be calculated by a control algorithm; it may be displayed on the interactive interface in the form of numbers, Chinese characters, percentages, etc., and the display manner of the matching degree is not specifically limited herein.
Optionally, the current area information and the target area information may be displayed on the interactive interface in the form of virtual frames, or may be displayed on the target interface in the form of black-and-white or color pictures. The matching degree is determined based on the current area information and the target area information and displayed on the interactive interface in the form of numbers, percentages, scores, Chinese characters, and the like; the matching degree may or may not be displayed on the same interactive interface as the current area information and the target area information. The first object is adjusted based on the matching degree on the interactive interface, the adjusted current area information and the target area information are displayed on the interactive interface, and the matching degree is determined again based on the adjusted current area information and the target area information.
For example, the user of the portable terminal, or the driver, may finely adjust the position of the first object using the matching degree displayed on the interactive interface or auxiliary information provided on other interfaces; the auxiliary information may include information guiding the movement of the first object, such as moving leftward, rightward, forward, or backward. This process may be repeated several times, and the matching degree displayed on the interactive interface is obtained based on the current area information and the target area information.
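The patent does not fix a formula for the matching degree. One plausible realization, sketched here purely as an assumption, is the intersection-over-union (IoU) of the two virtual frames representing the current area and the target area:

```python
def matching_degree(current_box, target_box):
    """Hypothetical matching degree between current and target areas.

    Boxes are axis-aligned rectangles (x1, y1, x2, y2). Returns the
    intersection-over-union (IoU), a value in [0, 1]; the patent does
    not specify the actual formula used.
    """
    ax1, ay1, ax2, ay2 = current_box
    bx1, by1, bx2, by2 = target_box
    # Intersection rectangle of the two virtual frames
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An identical pair of frames yields 1.0, disjoint frames yield 0.0, and partial overlap falls in between, which matches the way the matching degree is used against a target threshold in step S306.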
In step S306, in response to the matching degree being greater than the target threshold, first prompt information is displayed on the interactive interface, where the first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect the second object carried by the first object, and the detection device is deployed at a location associated with the target area.
In the technical solution provided in the above step S306 of the present invention, when the matching degree between the target area information and the current area information is detected to be greater than the target threshold, first prompt information is displayed on the interactive interface in response to the matching degree being greater than the target threshold. The first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect the second object carried by the first object, and may be a component displayed on the interactive interface. For example, when the matching degree is greater than the target threshold, first prompt information of "whether the detection device is allowed to detect the second object carried by the first object" is displayed on the interactive interface, and the "yes" component may be selected by clicking to allow the detection device to detect the second object carried by the first object. The second object may be an object placed on the first object, for example scrap steel on a vehicle. The detection device is deployed at a location associated with the target area, for example a side or a corner of the target area, and the target area may be a placement area of the first object such as a fixed parking lot.
Optionally, when the matching degree between the target position information and the current position information of the first object is detected to be greater than the target threshold, a first prompt signal may be output to the mobile terminal, and the first prompt information may be displayed on the interactive interface, asking whether the detection device is allowed to detect the second object carried by the first object; in response to selection of the allow button, the detection device deployed at the position associated with the target area is allowed to detect the second object carried by the first object.
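The threshold decision of step S306 can be sketched as a small dispatch function. The threshold value 0.9 and the prompt strings below are illustrative assumptions; the patent leaves both unspecified:

```python
def prompt_for(matching_degree, target_threshold=0.9):
    """Choose which prompt the interactive interface should display.

    Both the threshold value and the prompt texts are hypothetical;
    the patent only requires a comparison against a target threshold.
    """
    if matching_degree > target_threshold:
        # First prompt: the vehicle is parked in place; ask permission
        # for the detection device to detect the carried second object.
        return "allow the detection device to detect the second object?"
    # Otherwise the current area still needs adjustment (second prompt,
    # described in a later implementation manner of this embodiment).
    return "adjust the current area"
```

Note that the comparison is strictly "greater than": a matching degree exactly equal to the threshold still triggers the adjustment prompt.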
Through the above steps S302 to S306, the current area information and the target area information associated with the first object are displayed on the interactive interface, where the current area information is used for indicating the area information of the current area where the first object is located, and the target area information is used for indicating the area information of the target area where the first object needs to stop; the matching degree displayed on the interactive interface is acquired based on the current area information and the target area information, where the matching degree is used for representing the degree of matching between the current area and the target area; and in response to the matching degree being greater than the target threshold, first prompt information is displayed on the interactive interface, where the first prompt information is used for prompting that, when the first object is parked in the current area, the detection device is allowed to detect a second object carried by the first object, the detection device being deployed at a position associated with the target area. That is, the current area information of the first object is positioned in a real-time interaction manner, and the second object carried by the first object is detected only when the current area information and the target area information match well, so that a good parking area makes the detection of the second object more stable and accurate. This achieves the technical effect of improving the detection efficiency of the object and solves the technical problem of low detection efficiency of the object.
The above-described method of this embodiment is further described below.
As an optional implementation manner, step S304, based on the current area information and the target area information, acquires a matching degree displayed on the interactive interface, including: and acquiring the matching degree from the server, wherein the matching degree is determined by the server based on the current region information and the target region information.
In this embodiment, the matching degree may be determined based on the current area information and the target area information, for example, the current area information and the target area information may be detected by a control algorithm on the server, the matching degree of the current area information and the target area information displayed on the interactive interface may be determined, and the interactive interface may acquire the matching degree from the server and display the matching degree on the interactive interface.
Optionally, in the related art, on-site distance sensor positioning information is usually combined to determine whether the vehicle is parked in place, but the cost is high. The present application instead determines the matching degree based on the current area information and the target area information, and determines whether the target vehicle is parked in place based on the matching degree; not all information needs to be transmitted to the mobile terminal, so the interaction between the front-end interface and the back-end vehicle positioning algorithm can be realized quickly and with reduced bandwidth.
As an optional implementation manner, in response to the matching degree not being greater than the target threshold, second prompt information is displayed on the interactive interface, wherein the second prompt information is used for prompting to adjust the current area information.
In this embodiment, the matching degree is obtained from the server, and when the matching degree is not greater than the target threshold, second prompt information is displayed on the interactive interface in response thereto. The second prompt information may be auxiliary information displayed in a component on the interactive interface to guide movement of the vehicle; for example, the auxiliary information may include information such as moving leftward, rightward, forward, or backward, used to prompt an operation object to adjust the current area information (current position) of the first object. The operation object may be a driver, a holder of the mobile terminal, or the like, and is not specifically limited.
Optionally, the user of the portable terminal, or the driver, may repeat the adjustment of the position of the current area information of the first object through the matching degree displayed on the interactive interface or the second prompt information provided on other interfaces, where the auxiliary information may be displayed on the same interactive interface as the matching degree, or may be displayed on different interactive interfaces, and the present invention is not limited specifically.
For example, there may be a simplified diagram of the current region information and the target region information on the interactive interface, and the fine tuning process may be fully embodied on the interactive interface, and the position of the first object may be adjusted by the matching degree displayed on the interactive interface or the second prompt information provided on other interfaces.
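Such movement guidance can be derived from the geometric offset between the two frames. The sketch below is one hypothetical way to do so, assuming pixel coordinates with x growing rightward and y growing downward and a fixed mapping from screen directions to vehicle directions; none of these conventions are specified by the patent:

```python
def movement_hints(current_box, target_box, tol=5):
    """Derive coarse guidance ("move left", etc.) from the offset between
    the centers of the current and target virtual frames.

    Boxes are (x1, y1, x2, y2) in pixels; `tol` is an assumed dead zone
    within which no movement is suggested.
    """
    cx = (current_box[0] + current_box[2]) / 2
    cy = (current_box[1] + current_box[3]) / 2
    tx = (target_box[0] + target_box[2]) / 2
    ty = (target_box[1] + target_box[3]) / 2
    hints = []
    if tx - cx > tol:
        hints.append("move right")
    elif cx - tx > tol:
        hints.append("move left")
    if ty - cy > tol:
        hints.append("move backward")   # target center lower on screen
    elif cy - ty > tol:
        hints.append("move forward")
    return hints or ["hold position"]
```

Repeating this after each adjustment reproduces the iterative fine-tuning loop the embodiment describes: the hints shrink to "hold position" as the two frames converge.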
As an optional implementation manner, based on the current area information and the target area information, obtaining the matching degree displayed on the interactive interface includes: and acquiring the adjusted matching degree displayed on the interactive interface based on the target area information and the adjusted current area information.
In this embodiment, the position of the first object is adjusted, the matching degree between the adjusted current area information and the target area information is calculated, and the calculated matching degree is displayed on the interactive interface.
As an alternative implementation manner, movement information of the first object is displayed on the interactive interface, where the movement information is used for representing the movement state of the first object and, together with the matching degree, for adjusting the current area information.
In this embodiment, movement information of the first object may be displayed on the interactive interface, wherein the movement information may be used to represent a movement state of the first object, and may be auxiliary information guiding movement of the vehicle, such as movement to the left, movement to the right, movement to the front and back, and so on.
Optionally, the software has a plurality of different interactive interfaces, and the displayed interactive interface can be switched by a sliding or clicking operation; for example, the interactive interface displaying the matching degree can be switched by sliding to an interactive interface providing the movement information, and the position of the first object can be adjusted based on the movement information.
As an alternative embodiment, displaying the current region information and the target region information associated with the first object on the interactive interface includes: the current region information and the target region information are displayed on the interactive interface, and information other than the current region information and the target region information is prohibited from being displayed.
In this embodiment, only the current area information and the target area information need to be displayed on the interactive interface; to ensure a low bandwidth, displaying information other than the current area information and the target area information on that interactive interface is prohibited, where the information prohibited from being displayed may be information such as the color image acquired by the current camera.
Optionally, on the interactive interface for displaying the current area information and the target area information, it is not necessary to display all the information acquired by the camera; for real-time performance and low bandwidth, only the virtual frame of the target area information and the virtual frame of the current area information may be displayed, which ensures a low bandwidth and improves the security of the image information.
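The bandwidth argument can be made concrete: transmitting only the two virtual frames (and the matching degree) amounts to a payload of roughly a hundred bytes, versus megabytes for an uncompressed camera frame. The field names and coordinate values below are illustrative assumptions, not part of the patent:

```python
import json

# Hypothetical payload: only the two virtual frames and the matching
# degree are sent to the client, never the color image itself.
payload = json.dumps({
    "current_area": [412, 230, 980, 610],   # x1, y1, x2, y2 (assumed)
    "target_area": [400, 220, 1000, 620],
    "matching_degree": 0.87,
})

frame_bytes = len(payload.encode("utf-8"))
# A 1920x1080 RGB image is about 6 MB uncompressed, several orders of
# magnitude larger than the frame-only payload.
image_bytes = 1920 * 1080 * 3
```

Besides the bandwidth saving, keeping the raw image on the server side is what the embodiment credits for the improved security of the image information.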
In the embodiment of the application, the current area information and the target area information associated with the first object are displayed on the interactive interface, where the current area information is used for representing the area information of the current area where the first object is located, and the target area information is used for representing the area information of the target area where the first object needs to stop; the matching degree displayed on the interactive interface is acquired based on the current area information and the target area information, where the matching degree is used for representing the degree of matching between the current area and the target area; and in response to the matching degree being greater than the target threshold, first prompt information is displayed on the interactive interface, where the first prompt information is used for prompting that, when the first object is parked in the current area, the detection device is allowed to detect a second object carried by the first object, the detection device being deployed at a position associated with the target area. That is, the current area information of the first object is positioned in a real-time interaction manner, and the second object carried by the first object is detected only when the current area information and the target area information match well, so that a good parking area makes the detection of the second object more stable and accurate. This achieves the technical effect of improving the detection efficiency of the object and solves the technical problem of low detection efficiency of the object.
The embodiment of the invention also provides another image processing method from the server side.
Fig. 4 is a flowchart of another image processing method according to an embodiment of the present invention. As shown in fig. 4, the method may include the following steps.
Step S402, acquiring current area information and target area information associated with a first object, wherein the current area information is used for indicating the area information of a current area where the first object is located, and the target area information is used for indicating the area information of a target area where the first object needs to stop.
In the technical scheme provided in the above step S402, the server acquires the current area information and the target area information associated with the first object, where the current area information may be used to represent information of the area where the first object is currently located; the target area information may be used to indicate the area information of the target area where the first object needs to park, and the target area may be any place where the first object can be placed, such as an actually planned site; the first object may be any object in the target area, and may be a movable object, such as an automobile, a scrap steel vehicle, or the like.
Optionally, when the first object enters the target area, acquiring information of the first object, and processing the acquired information, where the processing may be processing the acquired data by using an algorithm, and the processing mode is not specifically limited herein; current region information and target region information associated with a first object are determined.
For example, when the first object is a vehicle and enters the target area, incoming-vehicle information of the first object is obtained, which may include the license plate, system number, orientation, and other information of the first object; the scanned information is sent to the server, and a vehicle positioning algorithm on the server may operate in real time based on the incoming-vehicle information to determine the current area information; at the same time, a preset field of view is imaged to obtain the target area information. A control program in the server may enable the server to acquire the current area information and the target area information, where the incoming-vehicle information may be obtained by scanning with a handheld terminal of a field worker.
It should be noted that, in the related art, a distance sensor is additionally installed at a fixed parking position to determine the incoming-vehicle information, but this method has a long deployment period and increases the integration cost of the system.
Step S404, determining a matching degree based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region.
In the technical scheme provided in the step S404, the position of the first object is continuously adjusted based on the target area information displayed on the interactive interface, the current area information is determined based on the adjusted position of the first object, and the matching degree is determined based on the current area information and the target area information, wherein the matching degree can be used for indicating the matching degree between the current area and the target area and can be calculated by a control algorithm.
In step S406, in response to the matching degree being greater than the target threshold, first prompt information is sent to the client, where the first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect the second object carried by the first object, and the detection device is deployed at a location associated with the target area.
In the technical solution provided in the above step S406 of the present invention, the matching degree is determined based on the current area information and the target area information, and whether the matching degree is greater than the target threshold is determined; in response to the matching degree being greater than the target threshold, the server sends the first prompt information to the client. The target threshold may be a set value, which is not specifically limited herein. The first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect the second object carried by the first object, and may be a component displayed on the interactive interface; for example, when the matching degree is greater than the target threshold, first prompt information asking whether the detection device is allowed to detect the second object carried by the first object is displayed on the interactive interface, and the "yes" component is selected by clicking to allow the detection device to detect the second object carried by the first object. This is merely an illustration, and the display mode of the first prompt information is not specifically limited. The second object may be an object placed on the first object, for example scrap steel on a vehicle; the detection device is deployed at a location associated with the target area, for example a side or a corner of the target area, and the target area may be a placement area of the first object such as a fixed parking lot.
Optionally, when the matching degree of the current area information and the target area information exceeds the target threshold, a first prompt signal is output to the mobile terminal (for example, the mobile terminal may be a portable terminal), the first prompt information is displayed on the interactive interface, the first prompt information may be whether the detection device is allowed to detect the second object carried by the first object, and in response to selecting the allowing button, the detection device disposed at the position associated with the target area is allowed to detect the second object carried by the first object.
For example, when the first object is a vehicle and enters the target area, the current area information and the target area information of the first object are determined, and the matching degree is determined based on them; when the matching degree between the current area information and the target area information is greater than the target threshold, the server sends the first prompt information to the client, and in response to the client selecting to allow the detection device to detect the second object carried by the first object, the detection device starts detecting the second object; the detection device may analyze the captured picture and determine the information of the second object.
Through the above steps S402 to S406, the current area information and the target area information associated with the first object are obtained, where the current area information is used for indicating the area information of the current area where the first object is located, and the target area information is used for indicating the area information of the target area where the first object needs to stop; the matching degree is determined based on the current area information and the target area information, where the matching degree is used for representing the degree of matching between the current area and the target area; and first prompt information is sent to the client in response to the matching degree being greater than the target threshold, where the first prompt information is used for prompting that, when the first object is parked in the current area, the detection device is allowed to detect a second object carried by the first object, the detection device being deployed at a position associated with the target area. That is, the current area information of the first object is positioned in a real-time interaction manner, and the second object carried by the first object is detected only when the current area information and the target area information match well; a good parking area makes the detection of the second object more stable and accurate, which achieves the technical effect of improving the detection efficiency of the object and solves the technical problem of low detection efficiency of the object.
The above-described method of this embodiment is further described below.
As an alternative embodiment, acquiring current region information associated with the first object includes: acquiring an original image of a first object acquired by image acquisition equipment, wherein the image acquisition equipment is arranged at a position associated with a target area; determining mask information of the first object based on the original image, wherein the mask information is used for representing the current position of the first object; the current region information is determined based on the mask information.
In this embodiment, an image acquisition device may be deployed at a location associated with the target area; the image acquisition device captures the first object to obtain an original image of the first object, mask information of the first object is determined based on the original image, and the current area information is determined based on the mask information. The image acquisition device may be any device capable of capturing images, for example a device having a camera or video-recording capability, such as a video camera, a mobile phone, a monitoring device, and the like, and is not specifically limited herein. The original image may be the acquired original color image. The mask information may take binary values 1 and 0, where a region with a mask value of 1 may represent the current area information of the first object; the mask information may be a mask of the current area information of the vehicle.
Optionally, when the first object enters the target area, the image acquisition device may capture the first object to obtain its original image; for example, when the image acquisition device is a camera, the original image of the first object may be obtained by shooting. The original image may be processed by a deep learning algorithm to obtain the mask information of the first object, and the current area information is determined based on the mask information.
Optionally, an original image of the first object may be acquired by the image acquisition device, the acquired original image is input into a positioning algorithm to obtain a mask of the current area information of the vehicle, the positioning algorithm may be a deep learning algorithm, the mask information may be 1 and 0, the area with the mask value of 1 may be represented as the current area information of the vehicle, and the current area information is determined through the above process, so as to complete positioning of the vehicle.
Alternatively, in this embodiment, when the image acquisition device is a camera and the first object is a vehicle, the camera is completely decoupled from the vehicle; the camera may be installed at a fixed position according to actual requirements, for example on a side or a corner of a fixed parking lot, and is related only to the target area and unrelated to the target object.
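Under the assumption that the positioning algorithm outputs a binary mask as described (1 for vehicle pixels, 0 elsewhere), the current area information can be reduced to a bounding frame as follows; `region_from_mask` is a hypothetical helper, not named in the patent:

```python
import numpy as np

def region_from_mask(mask):
    """Convert a binary mask (1 = vehicle pixel) from the positioning
    algorithm into a bounding frame (x1, y1, x2, y2), i.e. the current
    area information. Assumes a 2-D array indexed [row, col]."""
    rows = np.any(mask, axis=1)   # rows containing any vehicle pixel
    cols = np.any(mask, axis=0)   # columns containing any vehicle pixel
    if not rows.any():
        return None               # vehicle not found in the field of view
    y1, y2 = np.where(rows)[0][[0, -1]]
    x1, x2 = np.where(cols)[0][[0, -1]]
    return int(x1), int(y1), int(x2), int(y2)
```

The resulting frame can then be compared against the target area frame (e.g. via an overlap measure) to produce the matching degree used elsewhere in the embodiment.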
As an alternative embodiment, acquiring target area information associated with the first object includes: in response to detecting the first object, outputting a control instruction to the image acquisition device, wherein the image acquisition device is deployed at a position associated with the target area; target area information acquired by the image acquisition apparatus in response to the control instruction is acquired.
In this embodiment, when the first object is in the target area, the first object on the target area is detected, and in response to detecting the first object on the target area, a control instruction is output to the image capturing device, and the image capturing device images the preset field of view in response to the control instruction, to obtain target area information.
Optionally, when the first object is a vehicle, after acquiring the vehicle information on the target area, a control instruction is output to the image acquisition device in response to the detected coming vehicle information of the first object, and the image acquisition device images the preset visual field in response to the control instruction to obtain the target area information.
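The control-instruction exchange of this implementation manner can be sketched as follows, with `StubCamera` standing in for the image acquisition device; the class, message fields, and coordinate values are all assumptions for illustration:

```python
class StubCamera:
    """Stand-in for the image acquisition device (illustrative only)."""
    preset_box = (400, 220, 1000, 620)  # assumed preset field-of-view frame

    def capture(self, command):
        # The device images its preset field of view in response to the
        # control instruction and returns the target area frame.
        assert command["action"] == "capture_preset_view"
        return {"preset_box": self.preset_box}

def on_vehicle_detected(scan_info, camera):
    """When incoming-vehicle information is received (e.g. from a handheld
    scan), output a control instruction to the image acquisition device
    and collect the target area information it returns."""
    command = {"action": "capture_preset_view", "vehicle": scan_info["plate"]}
    frame = camera.capture(command)
    return {"vehicle": scan_info["plate"], "target_area": frame["preset_box"]}
```

The key point mirrored here is that imaging is triggered by detection of the first object rather than running continuously, which keeps the device idle until a vehicle actually arrives.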
As an optional implementation manner, the current area information and the target area information are issued to the client for display.
In this embodiment, the collected target area information and the current position information output by the positioning algorithm may be sent to the client for display by a control program in a wireless communication manner, where the client may be a mobile terminal or other terminals with an interactive interface, for example, may be software or an applet of a portable terminal.
It should be noted that, on the interactive interface of the present application, the color image acquired by the current camera may be displayed, or, for real-time performance and low bandwidth, only the virtual frame of the preset position information and the virtual frame of the current position information may be displayed, which also protects the security of the user's image data.
The embodiment of the invention also provides a vehicle image detection method for a parking scenario in scrap steel grading.
Fig. 5 is a flowchart of an image detection method of a vehicle according to an embodiment of the present invention. As shown in fig. 5, the method may include the following steps.
Step S502, displaying current area information and target area information associated with the vehicle on an interactive interface, wherein the current area information is used for indicating the area information of a current area where the vehicle is located, and the target area information is used for indicating the area information of a target area where the vehicle needs to stop.
In the technical solution provided in the above step S502 of the present invention, current area information and target area information associated with a vehicle are displayed on an interactive interface, where the current area information is used to represent area information of the current area where the vehicle is located, and the target area information is used to represent area information of the target area where the vehicle needs to stop. The target area may be a preset parking position marked on the ground, or a virtual frame that exists only on the interactive interface.
Alternatively, since a scrap steel loading and unloading site rarely has the space and stability needed to lay out a dedicated preset parking mark, the current area information and target area information associated with the vehicle (which may be displayed in the form of virtual frames) may be shown on the interactive interface, and the degree of matching between the current position of the vehicle and the preset parking position can be perceived through this display.
Step S504, based on the current area information and the target area information, obtaining the matching degree displayed on the interactive interface, wherein the matching degree is used for representing the matching degree between the current area and the target area.
In the technical solution provided in the above step S504 of the present invention, the matching degree between the current area information and the target area information is determined based on the two, and the matching degree is displayed on the interactive interface, where the matching degree represents how well the current area matches the target area. The first object may then be adjusted based on the matching degree.
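The patent does not fix how the matching degree is computed. One plausible choice, given that both areas are available as binary masks of the same image, is intersection-over-union (IoU) — a minimal sketch under that assumption:

```python
import numpy as np

def matching_degree(current_mask: np.ndarray, target_mask: np.ndarray) -> float:
    """Intersection-over-union between the current-area mask and the
    target-area mask; both are binary arrays of the same shape."""
    current = current_mask.astype(bool)
    target = target_mask.astype(bool)
    intersection = np.logical_and(current, target).sum()
    union = np.logical_or(current, target).sum()
    if union == 0:
        return 0.0  # neither area visible: no meaningful match
    return float(intersection / union)

# Example: two overlapping 50x50 regions on a 100x100 grid
current = np.zeros((100, 100), dtype=np.uint8)
target = np.zeros((100, 100), dtype=np.uint8)
current[10:60, 10:60] = 1   # current vehicle region
target[20:70, 20:70] = 1    # preset parking region
print(round(matching_degree(current, target), 3))  # 0.471
```

An IoU of 1.0 would mean the vehicle sits exactly on the preset position; the target threshold of the following step would be some value below 1.0 chosen for the site.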
In step S506, in response to the matching degree being greater than the target threshold, first prompt information is displayed on the interactive interface, where the first prompt information is used to prompt that, when the vehicle is parked in the current area, the detection device is allowed to detect the industrial object carried by the vehicle, and the detection device is deployed at a location associated with the target area.
In the technical solution provided in the above step S506, the matching degree between the current area information and the target area information is determined and displayed on the interactive interface. The first object may be adjusted based on the matching degree until the matching degree is greater than the target threshold; in response to the matching degree being greater than the target threshold, the first prompt information is displayed on the interactive interface. The first prompt information is used to prompt that, when the vehicle is parked in the current area, the detection device is allowed to detect the industrial object carried by the vehicle, the detection device being deployed at the position associated with the target area.
Optionally, the first object may be adjusted based on the matching degree shown on the interactive interface until the matching degree is greater than the target threshold. Other auxiliary information, for example "move left" or "move right", may also be displayed on the interactive interface; the first object is moved based on this auxiliary information, and the matching degree between the current area information and the target area information is recalculated in real time as the first object moves. When the matching degree is greater than the target threshold, the first prompt information is displayed on the interactive interface in response; the first prompt information may be a component on the interactive interface indicating "allowed" or "not allowed", and the detection device detects the industrial object carried by the vehicle in response to the allowing operation.
Optionally, after parking, the dedicated scrap unloading system begins to unload the scrap steel. After each layer of scrap is unloaded, the camera captures several pictures, and the scrap in the pictures is located and classified by the scrap grading system. After the whole vehicle is unloaded, the judgment system outputs a comprehensive judgment result.
As an alternative embodiment, the target area information is not deployed in the scene in which the vehicle is located.
In this embodiment, since a scrap steel loading and unloading site rarely has the space and stability needed to lay out a dedicated preset parking mark, the target area information may not be deployed in the scene where the vehicle is located; instead, the current area information and target area information associated with the vehicle (which may be displayed in the form of virtual frames) are shown on the interactive interface.
As an alternative implementation mode, a mask of the current area is displayed on the interactive interface, wherein the mask is obtained by converting the current area information based on deep learning.
In this embodiment, the current area information is converted, based on a deep learning system, into a mask of the current area of the vehicle, and the mask of the current area is displayed on the interactive interface. The mask values may be 0 and 1, and the region where the mask value is 1 represents the current area of the first object.
Optionally, the image acquisition device is deployed at a position associated with the target area. When the first object enters the target area, the current area information of the first object can be acquired by the image acquisition device, the acquired information can be processed by the deep learning algorithm to obtain the mask of the current area, and the mask of the current area is displayed on the interactive interface, thereby achieving the purpose of determining the current area information based on the mask information.
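Given the 0/1 mask described above, the current area information can be recovered, for example, as the bounding box of the pixels whose mask value is 1. A minimal sketch; the (x, y, width, height) tuple layout is an assumption for illustration:

```python
import numpy as np

def mask_to_region(mask: np.ndarray):
    """Derive the current-area bounding box from a binary mask in which
    value 1 marks pixels belonging to the first object."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # object not visible in the field of view
    x, y = int(xs.min()), int(ys.min())
    return (x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1  # a 3-row by 4-column blob of ones
print(mask_to_region(mask))  # (3, 2, 4, 3)
```

The resulting box is what would be drawn as the "current position" virtual frame on the interactive interface.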
In the embodiment of the application, current area information and target area information associated with a vehicle are displayed on an interactive interface, where the current area information is used to indicate the area information of the current area where the vehicle is located, and the target area information is used to indicate the area information of the target area where the vehicle needs to stop; the matching degree displayed on the interactive interface is obtained based on the current area information and the target area information, where the matching degree is used to represent the degree of matching between the current area and the target area; and in response to the matching degree being greater than the target threshold, first prompt information is displayed on the interactive interface, where the first prompt information is used to prompt that, when the vehicle is parked in the current area, the detection device is allowed to detect the industrial object carried by the vehicle, the detection device being deployed at the position associated with the target area. That is, the current area information of the first object is located in a real-time interactive manner, and when the current area information matches the target area information well, the second object carried by the first object can be detected. A good stopping area makes detection of the second object more stable and accurate, thereby achieving the technical effect of improving object detection efficiency and solving the technical problem of low object detection efficiency.
Example 2
This example further describes a preferred embodiment of the above method, specifically a parking indication system for scrap steel grading.
Parking indication for scrap steel grading can guide customers and drivers to park scrap-laden vehicles reasonably and properly, and is generally nested in an existing scrap grading system. In the grading process, the related art is vision-based: a certain number of cameras are arranged, and the scrap in the carriage is analyzed layer by layer during loading and unloading. The quality of this vision-based analysis, however, depends to a large extent on whether the vehicle is parked properly; the carriage must lie entirely within the cameras' field of view, otherwise the grading is inaccurate for lack of the source information needed by the visual analysis.
In the related art, the scrap vehicle does not have a fixed parking position, and a high-definition camera alone is used for automatic tracking and positioning. However, because the guardrail of a scrap vehicle is often high, once the camera position is fixed, even a camera with three-dimensional rotation and optical zoom will, at certain viewing angles, be blocked heavily by the vehicle guardrail, so the scrap grade information remains incomplete and the result inaccurate.
To address these problems, an artificial-intelligence scrap grading method has been proposed that autonomously identifies the contour information of the vehicle and combines it with positioning information from on-site distance sensors to determine whether the vehicle is parked in place; if the requirements are not met, the system prompts the driver to park the vehicle in place. In this method, however, the system must be specially equipped with additional auxiliary hardware such as distance sensors, which brings relatively high deployment cost and a long deployment period.
In view of these problems, the present application provides a parking indication system for scrap steel grading that uses a vehicle positioning algorithm to interact with and guide the driver in real time, without transmitting all of the camera's information to the software or applet. Real-time interaction with the driver, guidance toward reasonable parking, and the loading, unloading, and grading of scrap are thus achieved; the interaction between the front end and the back-end vehicle positioning algorithm can be realized quickly at very low bandwidth, improving the driver's parking experience.
Embodiments of the present invention are further described below.
In this embodiment, a parking indication system is installed at a fixed parking place where a vehicle loaded with scrap steel enters, and fig. 6 is a schematic diagram of a scrap steel grade determining parking indication system according to an embodiment of the present invention, as shown in fig. 6, the entire system may include a preset parking position 602, a camera 601, a server 604, and a portable terminal 605 (including but not limited to a cellular phone), wherein the portable terminal 605 may communicate with the server 604 wirelessly.
Fig. 7 is a flowchart of a scrap steel grade determining parking indication method according to an embodiment of the present invention, as shown in fig. 7, the method may include the steps of:
step S701, the scrap steel vehicle is driven into a preset parking position.
In this embodiment, after the server obtains the incoming-vehicle information, it controls the camera to start imaging the preset field of view. The vehicle position relative to the preset parking position can be obtained by real-time calculation with a vehicle positioning algorithm (which may be a deep learning algorithm) on the server. The control program on the server can transmit the collected preset parking position information and the position information output by the positioning algorithm to the portable terminal through wireless communication, and the scrap vehicle is driven into the preset parking position based on this position information.
Alternatively, the incoming-vehicle information can be obtained by a site worker scanning with a handheld terminal, and information such as the license plate and system number of the incoming vehicle can be sent to the server.
In step S702, the server captures a preset parking position/vehicle positioning algorithm position.
In this embodiment, the server captures the preset parking position/vehicle positioning algorithm position, where fig. 8 is a flowchart of a vehicle positioning algorithm according to an embodiment of the present invention, and as shown in fig. 8, the vehicle positioning algorithm position may be calculated by the following steps.
Step S801, an input image is acquired.
The input to the positioning algorithm is the original color image from the camera; acquiring this original color image achieves the purpose of obtaining the input image.
In step S802, convolution layer 1 and maximum pooling processing are performed on the image.
In this embodiment, the image is processed by convolution layer 1 and maximum pooling, where convolution layer 1 may have 64 convolution kernels of size 7×7 and the maximum pooling window may be 3×3.
In step S803, the image is subjected to processing of the convolution layers 2 and 3.
Optionally, the image output by convolution layer 1 and the maximum pooling is processed by convolution layer 2 and convolution layer 3, where convolution layer 2 may have 64 kernels of size 3×3 and convolution layer 3 may have 128 kernels of size 3×3.
In step S804, the processed image is subjected to the processing of the convolution layers 4 and 5.
Optionally, the image processed by convolution layers 2 and 3 is processed by convolution layers 4 and 5, where convolution layer 4 may have 128 kernels of size 3×3 with a stride of 2 (s=2), and convolution layer 5 may have 256 kernels of size 3×3.
In step S805, the processed image is subjected to processing of the convolution layers 6 and 7.
Optionally, the image processed by convolution layers 4 and 5 is processed by convolution layers 6 and 7, where convolution layer 6 may have 256 kernels of size 3×3 with a stride of 2 (s=2), and convolution layer 7 may have 256 kernels of size 3×3.
In step S806, the processed image is subjected to the processing of the convolution layers 8 and 9.
Optionally, the image processed by convolution layers 6 and 7 is processed by convolution layers 8 and 9, where convolution layer 8 may have 256 kernels of size 3×3 with a stride of 2 (s=2), and convolution layer 9 may have 256 kernels of size 3×3.
Step S807 performs processing of the convolution layer 7, convolution layer 10, addition layer 1, and up-sampling layer 1 on the processed image.
Optionally, the image processed by convolution layers 8 and 9 is processed by convolution layer 7, convolution layer 10, addition layer 1, and up-sampling layer 1, where convolution layer 10 may have 256 kernels of size 3×3 and up-sampling layer 1 may perform 2× up-sampling.
Step S808 performs processing of the convolution layer 5, convolution layer 10, addition layer 2, and up-sampling layer 2 on the processed image.
Alternatively, the image processed by convolution layer 7, convolution layer 10, addition layer 1, and up-sampling layer 1 is processed by convolution layer 5, convolution layer 10, addition layer 2, and up-sampling layer 2, where up-sampling layer 2 may perform 2× up-sampling.
Step S809 performs processing of upsampling 3 on the processed image.
Alternatively, the image processed by convolution layer 5, convolution layer 10, addition layer 2, and up-sampling layer 2 is processed by up-sampling layer 3, where up-sampling layer 3 may perform 4× up-sampling.
Step S810, outputting a mask image.
Optionally, a mask of the current vehicle position information is output through the deep learning algorithm of fig. 8, where the mask values may be 0 and 1, and the region where the mask value is 1 is the current vehicle position information.
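Note that the down-sampling in steps S802-S806 (one stride-2 max pooling plus three stride-2 convolutions, a 16× reduction) is exactly undone by the 2×, 2×, and 4× up-sampling of steps S807-S809, so the output mask has the same resolution as the input image. A small trace of the feature-map size under two assumptions the patent does not state — "same" padding on all convolutions and a stride of 2 for the pooling layer:

```python
def mask_resolution(input_size: int) -> int:
    """Trace the spatial size of the feature map through the layers of
    steps S802-S809, assuming 'same' padding so that only the strided
    operations change resolution (an assumption for illustration)."""
    size = input_size
    size //= 2          # 3x3 max pooling, assumed stride 2
    size //= 2          # convolution layer 4, stride 2
    size //= 2          # convolution layer 6, stride 2
    size //= 2          # convolution layer 8, stride 2
    size *= 2           # up-sampling layer 1 (2x)
    size *= 2           # up-sampling layer 2 (2x)
    size *= 4           # up-sampling layer 3 (4x)
    return size

print(mask_resolution(512))  # 512
```

Because the total up-sampling factor (2 × 2 × 4 = 16) matches the total down-sampling factor, the output mask aligns pixel-for-pixel with the camera image, which is what lets the mask be overlaid directly on the interactive interface.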
Optionally, the preset parking position is calculated by a vehicle positioning algorithm.
In step S703, the portable terminal receives the position information, and displays and prompts the parking matching degree.
In this embodiment, the preset parking position and the actual position of the vehicle are compared to obtain the position matching degree. Fig. 9 is a schematic diagram of the interactive interface display according to an embodiment of the present invention. As shown in fig. 9, the preset parking position and the actual parking position of the vehicle are displayed on the interface of the portable terminal, and the position matching degree is displayed on the interactive interface.
Alternatively, because a scrap loading and unloading site rarely offers the space and stability needed to lay out a dedicated preset parking mark, the preset parking space defined in fig. 9 may be virtual, i.e., it exists only on the interface in fig. 9 and is presented to the driver. The matching degree between the current vehicle position and the preset parking position can be perceived through the interactive interface in fig. 9, and the difference between the vehicle position and the preset parking position shown on the interface guides the driver in adjusting the vehicle position.
Step S704, judging whether the matching degree of the automobile reaches the standard.
In this embodiment, the control algorithm on the server detects the matching degree between the preset parking position and the real-time position of the vehicle and determines whether it meets the standard. If not, the method returns to step S702, and the user of the portable terminal, i.e., the driver, finely adjusts the parking position according to the position matching degree displayed on the interactive interface or the auxiliary information provided on other interfaces, possibly repeatedly, until the parking matching degree meets the standard.
The auxiliary information may be information for guiding movement of the vehicle, for example, information for moving left, right, or moving back and forth.
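The auxiliary movement hints can be derived, for example, from the offset between the centers of the current and preset parking frames. A hypothetical sketch — the tolerance value, axis conventions, and hint wording are all assumptions, not taken from the patent:

```python
def guidance(current_center, target_center, tolerance=5):
    """Turn the offset between the vehicle's current position and the
    preset parking position into human-readable hints for the driver."""
    dx = target_center[0] - current_center[0]
    dy = target_center[1] - current_center[1]
    hints = []
    if abs(dx) > tolerance:
        hints.append("move right" if dx > 0 else "move left")
    if abs(dy) > tolerance:
        hints.append("move forward" if dy > 0 else "move back")
    return hints or ["hold position"]

print(guidance((100, 200), (140, 203)))  # ['move right']
```

Hints of this form would be pushed to the portable terminal alongside the matching degree, so the driver gets a concrete direction rather than a bare number.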
Step S705, the parking is ended.
When the control algorithm on the server detects that the matching degree between the preset parking position and the real-time position of the vehicle is high and exceeds a specific threshold value, a prompt signal is output to the portable terminal, the parking of the scrap steel vehicle is completed, and the parking is finished.
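Steps S702-S705 form a capture-locate-prompt loop that terminates once the matching degree exceeds the threshold. A schematic sketch with stubbed-in capture, positioning, and messaging functions (all names, the message fields, and the 0.9 threshold are hypothetical):

```python
def parking_loop(capture, locate, send, threshold=0.9, max_rounds=100):
    """Sketch of the S702-S705 loop: capture an image, locate the
    vehicle to get a matching degree, push the degree to the terminal,
    and stop once the degree exceeds the threshold."""
    for _ in range(max_rounds):
        image = capture()
        degree = locate(image)
        if degree > threshold:
            send({"match": degree, "prompt": "parking complete"})
            return True
        send({"match": degree, "prompt": "adjust position"})
    return False  # driver never reached the required matching degree

# Simulated run: the matching degree improves as the driver adjusts
degrees = iter([0.4, 0.7, 0.95])
messages = []
ok = parking_loop(lambda: None, lambda img: next(degrees), messages.append)
print(ok, len(messages))  # True 3
```

In the real system, `capture` would be the camera read, `locate` the deep learning positioning algorithm of fig. 8 plus the matching-degree computation, and `send` the wireless push to the portable terminal.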
Optionally, after parking, the dedicated scrap unloading system begins to unload the scrap steel; after each layer of scrap is unloaded, the camera captures several pictures, and the scrap in the pictures is located and classified by the scrap grading system. After the whole vehicle is finally unloaded, the grading system outputs a comprehensive judgment result; determining a good parking position makes the grading result more stable.
In the present application, the camera is completely separated from the vehicle and arranged at a fixed position according to the requirements of the scrap grading system, for example on the side or at a corner of the fixed parking space, with no attachment to the scrap vehicle. Meanwhile, to ensure the security of user information, the present application supports displaying only the preset parking space and the mask of the current vehicle position output by the algorithm to prompt the driver about the current parking matching degree, so that no color image of the scrap scene needs to be displayed on the terminal.
The present application realizes the interaction process, functions, and interface between the scrap vehicle's parking position and the parking software or applet, where the interface includes a virtual frame of the preset parking position and a virtual frame representing the current vehicle position. On the interactive interface, the color image acquired by the current camera may be displayed, or, for real-time display at low bandwidth, only the virtual frame of the preset parking position and the virtual frame of the vehicle's current real-time position may be shown, which also protects the security of the user's image data. Because the camera is separated from the vehicle and arranged at a preset position, the technical problems of a long deployment period and high cost are solved.
According to the application, the current region information of the first object is positioned in a real-time interaction mode, and when the current region information and the target region information have good matching degree, the second object borne by the first object can be detected, so that the detection of the second object can be more stable and accurate through a good stopping region, the technical effect of improving the detection efficiency of the object is realized, and the technical problem of low detection efficiency of the object is solved.
In another alternative embodiment, fig. 10 illustrates, in a block diagram, one embodiment of a service grid using the computer terminal 10 (or mobile device) shown in fig. 1 described above. Fig. 10 is a block diagram of a service grid of an object detection method according to an embodiment of the present invention. As shown in fig. 10, the service grid 1000 is mainly used to facilitate secure and reliable communication between a plurality of micro-services, where the micro-services are obtained by decomposing an application program into a plurality of smaller services or instances that run on different clusters/machines.
As shown in fig. 10, the micro-service may include an application service instance a and an application service instance B, which form a functional application layer of the service grid 1000. In one embodiment, application service instance A runs in the form of a container/process 1008 on a machine/workload container group 1104 (POD) and application service instance B runs in the form of a container/process 1100 on a machine/workload container group 1106 (POD).
In one embodiment, application service instance a may be a commodity query service and application service instance B may be a commodity ordering service.
As shown in fig. 10, application service instance A and grid agent (sidecar) 1003 coexist in machine/workload container group 1104, and application service instance B and grid agent 1005 coexist in machine/workload container group 1106. Grid agent 1003 and grid agent 1005 form the data plane layer (data plane) of service grid 1000. Grid agent 1003 and grid agent 1005 each run in the form of a container/process, may receive requests for the commodity query service, and may communicate bi-directionally with application service instance A and application service instance B, respectively. In addition, bi-directional communication is also possible between grid agent 1003 and grid agent 1005.
In one embodiment, all traffic of application service instance A is routed through grid agent 1003 to the appropriate destination, and all network traffic of application service instance B is routed through grid agent 1005 to the appropriate destination. The network traffic mentioned here includes, but is not limited to, the Hypertext Transfer Protocol (HTTP), Representational State Transfer (REST), the high-performance, general-purpose open-source framework gRPC, the open-source in-memory data structure store Redis, and the like.
In one embodiment, the functionality of the data plane layer may be extended by writing custom filters for the agents (Envoy) in the service grid 1000; these filters may be configured so that the service grid correctly proxies service traffic for service interworking and service governance. Grid agent 1003 and grid agent 1005 may be configured to perform at least one of the following functions: service discovery, health checking, routing, load balancing, authentication and authorization, and observability.
As shown in fig. 10, the service grid 1000 also includes a control plane layer. The control plane layer may be a set of services running in a dedicated namespace, hosted by the hosting control plane component 1001 in machine/workload container group (machine/Pod) 1002. As shown in fig. 10, the hosting control plane component 1001 communicates bi-directionally with grid agent 1003 and grid agent 1005, and is configured to perform control and management functions. For example, the hosting control plane component 1001 receives telemetry data transmitted by grid agent 1003 and grid agent 1005, which may be further aggregated. The hosting control plane component 1001 may also provide user-oriented application program interfaces (APIs) to manipulate network behavior more easily, provide configuration data to grid agent 1003 and grid agent 1005, and so on. It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but those skilled in the art should understand that the present invention is not limited by the order of the acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules involved are not necessarily required by the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the image processing method according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, or the part of it contributing over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present invention.
Example 3
According to an embodiment of the present invention, there is also provided an image processing apparatus for implementing the image processing method shown in fig. 3 described above.
Fig. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 11, the image processing apparatus 1100 may include: a first display unit 1102, a first acquisition unit 1104, and a second display unit 1106.
The first display unit 1102 is configured to display, on the interactive interface, current area information and target area information associated with the first object, where the current area information is used to represent area information of a current area where the first object is located, and the target area information is used to represent area information of a target area where the first object needs to stop.
A first obtaining unit 1104, configured to obtain, based on the current area information and the target area information, a matching degree displayed on the interactive interface, where the matching degree is used to represent a matching degree between the current area and the target area.
The second display unit 1106 is configured to, in response to the matching degree being greater than the target threshold, display, on the interactive interface, first prompt information, where the first prompt information is used to prompt, when the first object is parked in the current area, to allow the detection device to detect a second object carried by the first object, and the detection device is disposed at a location associated with the target area.
Here, it should be noted that the first display unit 1102, the first acquisition unit 1104, and the second display unit 1106 correspond to steps S302 to S306 in embodiment 1, and the three units are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in the first embodiment. It should be noted that the above-described unit may be operated as a part of the apparatus in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an image processing apparatus for implementing the image processing method shown in fig. 4 described above.
Fig. 12 is a schematic diagram of another image processing apparatus according to an embodiment of the present invention. As shown in fig. 12, the image processing apparatus 1200 may include: a second acquisition unit 1202, a determination unit 1204, and a transmission unit 1206.
A second obtaining unit 1202, configured to obtain current area information and target area information associated with the first object, where the current area information is used to represent area information of a current area where the first object is located, and the target area information is used to represent area information of a target area where the first object needs to stop.
And a determining unit 1204 configured to determine a degree of matching based on the current region information and the target region information, wherein the degree of matching is used to represent a degree of matching between the current region and the target region.
The sending unit 1206 is configured to send, to the client, first prompt information in response to the matching degree being greater than the target threshold, where the first prompt information is configured to prompt that, when the first object is parked in the current area, detection of a second object carried by the first object is allowed by the detection device, and the detection device is deployed at a location associated with the target area.
Here, it should be noted that the second acquiring unit 1202, the determining unit 1204, and the transmitting unit 1206 correspond to steps S402 to S406 in embodiment 1, and the three units are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment one. It should be noted that the above-described unit may be operated as a part of the apparatus in the computer terminal 10 provided in the first embodiment.
According to an embodiment of the present invention, there is also provided an image detection apparatus for a vehicle for implementing the image detection method for a vehicle shown in fig. 5 described above.
Fig. 13 is a schematic view of an image detection apparatus of a vehicle according to an embodiment of the present invention. As shown in fig. 13, the image detection apparatus 1300 of the vehicle may include: a third display unit 1302, a third acquisition unit 1304, and a fourth display unit 1306.
The third display unit 1302 is configured to display, on the interactive interface, current area information and target area information associated with the vehicle, where the current area information is used to represent area information of a current area where the vehicle is located, and the target area information is used to represent area information of a target area where the vehicle needs to stop.
A third acquisition unit 1304, configured to acquire, based on the current area information and the target area information, a matching degree displayed on the interactive interface, where the matching degree is used to represent the degree of matching between the current area and the target area.

A fourth display unit 1306, configured to display first prompt information on the interactive interface in response to the matching degree being greater than the target threshold, where the first prompt information is used to prompt that, when the vehicle is parked in the current area, the detection device is allowed to detect the industrial object carried by the vehicle, and the detection device is deployed at a position associated with the target area.
Here, it should be noted that the third display unit 1302, the third acquisition unit 1304, and the fourth display unit 1306 correspond to steps S502 to S506 in Embodiment 1; the examples and application scenarios implemented by the three units are the same as those of the corresponding steps, but are not limited to the contents disclosed in Embodiment 1. It should be noted that the above units may operate as part of the apparatus in the computer terminal 10 provided in Embodiment 1.
In the image processing apparatus of this embodiment, current area information and target area information associated with a first object are displayed on an interactive interface through a first display unit, where the current area information is used to represent area information of a current area where the first object is located, and the target area information is used to represent area information of a target area where the first object needs to stop; a matching degree displayed on the interactive interface is acquired through a first acquisition unit based on the current area information and the target area information, where the matching degree is used to represent the degree of matching between the current area and the target area; and first prompt information is displayed on the interactive interface through a second display unit in response to the matching degree being greater than the target threshold, where the first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect a second object carried by the first object, and the detection device is deployed at a position associated with the target area. That is, the current area information of the first object is located in a real-time interactive manner, and when the current area information matches the target area information well, the second object carried by the first object can be detected, so that the detection of the second object is more stable and accurate owing to a good stop area, thereby achieving the technical effect of improving the detection efficiency of the object and solving the technical problem of low detection efficiency of the object.
Example 4
Embodiments of the present application may provide an image processing system, which may include a server and a client. Optionally, the image processing system includes: the server, configured to acquire current area information and target area information associated with a first object and to determine a matching degree based on the current area information and the target area information, where the current area information is used to represent area information of a current area where the first object is located, and the target area information is used to represent area information of a target area where the first object needs to stop; and the client, configured to display first prompt information in response to the matching degree being greater than a target threshold, where the first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect a second object carried by the first object, and the detection device is deployed at a position associated with the target area.
In the embodiment of the present application, current area information and target area information associated with a first object are acquired through the server, and a matching degree is determined based on the current area information and the target area information, where the current area information is used to represent area information of a current area where the first object is located, and the target area information is used to represent area information of a target area where the first object needs to stop; and first prompt information is displayed by the client in response to the matching degree being greater than the target threshold, where the first prompt information is used to prompt that, when the first object is parked in the current area, the detection device is allowed to detect a second object carried by the first object, and the detection device is deployed at a position associated with the target area. That is, the current area information of the first object is located in a real-time interactive manner, and when the current area information matches the target area information well, the second object carried by the first object can be detected, so that the detection of the second object is more stable and accurate owing to a good stop area, thereby achieving the technical effect of improving the detection efficiency of the object and solving the technical problem of low detection efficiency of the object.
Example 5
Embodiments of the present application may provide a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above-mentioned computer terminal may execute the program code of the following steps in the image processing method: displaying current area information and target area information associated with a first object on an interactive interface, wherein the current area information is used for representing area information of a current area where the first object is located, and the target area information is used for representing area information of a target area where the first object needs to stop; acquiring the matching degree displayed on the interactive interface based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and in response to the matching degree being greater than the target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information is used for prompting that when the first object is parked in the current area, the detection equipment is allowed to detect a second object carried by the first object, and the detection equipment is deployed at a position associated with the target area.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: displaying current area information and target area information associated with a first object on an interactive interface, wherein the current area information is used for representing area information of a current area where the first object is located, and the target area information is used for representing area information of a target area where the first object needs to stop; acquiring the matching degree displayed on the interactive interface based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and in response to the matching degree being greater than the target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information is used for prompting that when the first object is parked in the current area, the detection equipment is allowed to detect a second object carried by the first object, and the detection equipment is deployed at a position associated with the target area.
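The steps above reduce to a simple decision: compare the matching degree against the target threshold and select which prompt information the interactive interface displays. The function name, prompt strings, and threshold value in the following sketch are illustrative assumptions; the embodiments do not fix them.

```python
# Non-limiting sketch of the prompt-selection logic described above.
TARGET_THRESHOLD = 0.8  # assumed value; the embodiments leave it unspecified

def select_prompt(matching_degree: float) -> str:
    # The embodiments use a strict "greater than" comparison.
    if matching_degree > TARGET_THRESHOLD:
        # First prompt information: detection of the carried object is allowed.
        return "Parking OK: detection of the carried object is allowed."
    # Second prompt information: the current area information should be adjusted.
    return "Adjust position: matching degree is not above the threshold."

print(select_prompt(0.95))
print(select_prompt(0.50))
```

Note that a matching degree exactly equal to the threshold triggers the second prompt, since the embodiments respond only to the matching degree being greater than the target threshold.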
Optionally, the above processor may further execute program code for: and acquiring the matching degree from the server, wherein the matching degree is determined by the server based on the current region information and the target region information.
Optionally, the above processor may further execute program code for: and responding to the matching degree not greater than the target threshold, and displaying second prompt information on the interactive interface, wherein the second prompt information is used for prompting and adjusting the current region information.
Optionally, the above processor may further execute program code for: and acquiring the adjusted matching degree displayed on the interactive interface based on the target area information and the adjusted current area information.
Optionally, the above processor may further execute program code for: displaying movement information of the first object on the interactive interface, wherein the movement information is used for representing a movement state in which the first object adjusts the current area information to achieve the matching degree.
Optionally, the above processor may further execute program code for: the current region information and the target region information are displayed on the interactive interface, and information other than the current region information and the target region information is prohibited from being displayed.
As an alternative example, the processor may call the information stored in the memory and the application program through the transmission device to perform the following steps: acquiring current region information and target region information associated with a first object, wherein the current region information is used for representing region information of a current region where the first object is located, and the target region information is used for representing region information of a target region to which the first object needs to stop; determining a matching degree based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and sending first prompt information to the client in response to the matching degree being greater than the target threshold, wherein the first prompt information is used for prompting that when the first object is parked in the current area, the detection equipment is allowed to detect a second object borne by the first object, and the detection equipment is deployed at a position associated with the target area.
Optionally, the above processor may further execute program code for: acquiring an original image of a first object acquired by image acquisition equipment, wherein the image acquisition equipment is arranged at a position associated with a target area; determining mask information of the first object based on the original image, wherein the mask information is used for representing the current position of the first object; the current region information is determined based on the mask information.
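As a non-limiting sketch of the last step above (determining the current region information based on the mask information), the current area may be taken as the bounding box of the nonzero mask pixels. The binary-mask representation and the bounding-box rule are assumptions for illustration; the embodiments only state that the mask information represents the current position of the first object.

```python
# Sketch: derive current-area information from mask information by taking
# the bounding box of the nonzero mask pixels.
import numpy as np

def region_from_mask(mask: np.ndarray):
    """mask: 2-D array, nonzero where the object is; returns (x1, y1, x2, y2)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # the first object is not present in the image
    # Half-open box: max index + 1 on each axis.
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:4, 3:6] = 1  # a 2x3 object region
print(region_from_mask(mask))  # → (3, 2, 6, 4)
```

A box obtained this way can feed directly into a rectangle-based matching-degree computation against the target area.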
Optionally, the above processor may further execute program code for: in response to detecting the first object, outputting a control instruction to the image acquisition device, wherein the image acquisition device is deployed at a position associated with the target area; and acquiring the target area information collected by the image acquisition device in response to the control instruction.
Optionally, the above processor may further execute program code for: and transmitting the current region information and the target region information to the client for display.
As an alternative example, the processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: displaying current area information and target area information associated with the vehicle on an interactive interface, wherein the current area information is used for indicating the area information of a current area where the vehicle is located, and the target area information is used for indicating the area information of a target area where the vehicle needs to stop; acquiring the matching degree displayed on the interactive interface based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and in response to the matching degree being greater than the target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information is used for prompting that when the vehicle is parked in the current area, the detection device is allowed to detect the industrial object carried by the vehicle, and the detection device is deployed at a position associated with the target area.
Optionally, the above processor may further execute program code for: the target area information is not deployed in the scene in which the vehicle is located.
Optionally, the above processor may further execute program code for: and displaying the mask of the current area on the interactive interface, wherein the mask is obtained by converting the information of the current area based on deep learning.
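As a non-limiting sketch of how a deep-learning result might be converted into the displayed mask, the per-pixel probability map produced by a segmentation model can be thresholded into a binary overlay. The segmentation model itself is out of scope here, so a precomputed probability map stands in for its output; the function name and threshold value are assumptions.

```python
# Sketch: threshold a segmentation model's probability map into the binary
# mask of the current area that the interactive interface displays.
import numpy as np

def probabilities_to_mask(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """prob_map: per-pixel object probabilities in [0, 1]; returns a 0/1 mask."""
    return (prob_map >= threshold).astype(np.uint8)

# A stand-in for the deep-learning output over a 2x2 image.
prob = np.array([[0.1, 0.9],
                 [0.7, 0.2]])
print(probabilities_to_mask(prob).tolist())  # → [[0, 1], [1, 0]]
```

The resulting mask can then be rendered as an overlay on the interactive interface alongside the target area.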
The embodiment of the invention provides an image processing method, which is used for positioning the current region information of a first object in a real-time interaction mode, and detecting a second object borne by the first object when the current region information and the target region information have good matching degree, so that the detection of the second object can be more stable and accurate through a good stop region, the technical effect of improving the detection efficiency of the object is realized, and the technical problem of low detection efficiency of the object is solved.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing relevant hardware of a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
Example 6
Embodiments of the present invention also provide a computer-readable storage medium. Alternatively, in this embodiment, the computer-readable storage medium may be used to store the program code executed by the image processing method provided in the first embodiment.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Optionally, in the present embodiment, the above-mentioned computer-readable storage medium is configured to store program code for performing the steps of: displaying current area information and target area information associated with a first object on an interactive interface, wherein the current area information is used for representing area information of a current area where the first object is located, and the target area information is used for representing area information of a target area where the first object needs to stop; acquiring the matching degree displayed on the interactive interface based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and in response to the matching degree being greater than the target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information is used for prompting that when the first object is parked in the current area, the detection equipment is allowed to detect a second object carried by the first object, and the detection equipment is deployed at a position associated with the target area.
Optionally, the above computer-readable storage medium may further store program code for performing the following steps: acquiring the matching degree from the server, wherein the matching degree is determined by the server based on the current region information and the target region information.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: in response to the matching degree being not greater than the target threshold, displaying second prompt information on the interactive interface, wherein the second prompt information is used for prompting adjustment of the current region information.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: acquiring the adjusted matching degree displayed on the interactive interface based on the target region information and the adjusted current region information.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: displaying movement information of the first object on the interactive interface, wherein the movement information is used for representing a movement state in which the first object adjusts the current region information to achieve the matching degree.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: displaying the current region information and the target region information on the interactive interface, and prohibiting the display of information other than the current region information and the target region information.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: acquiring current region information and target region information associated with a first object, wherein the current region information is used for representing region information of a current region where the first object is located, and the target region information is used for representing region information of a target region to which the first object needs to stop; determining a matching degree based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and sending first prompt information to the client in response to the matching degree being greater than the target threshold, wherein the first prompt information is used for prompting that when the first object is parked in the current area, the detection equipment is allowed to detect a second object borne by the first object, and the detection equipment is deployed at a position associated with the target area.
Optionally, the above computer-readable storage medium may further store program code for performing the following steps: acquiring an original image of the first object acquired by an image acquisition device, wherein the image acquisition device is deployed at a position associated with the target area; determining mask information of the first object based on the original image, wherein the mask information is used for representing the current position of the first object; and determining the current region information based on the mask information.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: in response to detecting the first object, outputting a control instruction to the image acquisition device, wherein the image acquisition device is deployed at a position associated with the target area; and acquiring the target area information collected by the image acquisition device in response to the control instruction.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: sending the current region information and the target region information to the client for display.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: displaying current area information and target area information associated with the vehicle on an interactive interface, wherein the current area information is used for indicating the area information of a current area where the vehicle is located, and the target area information is used for indicating the area information of a target area where the vehicle needs to stop; acquiring the matching degree displayed on the interactive interface based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region; and in response to the matching degree being greater than the target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information is used for prompting that when the vehicle is parked in the current area, the detection equipment is allowed to detect the industrial object borne by the vehicle and is deployed at the position associated with the target area.
Optionally, in the above computer-readable storage medium, the target area information is not deployed in the scene in which the vehicle is located.

Optionally, the above computer-readable storage medium may further store program code for performing the following steps: displaying a mask of the current area on the interactive interface, wherein the mask is obtained by converting the current area information based on deep learning.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for a portion not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical functional division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, essentially or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several modifications and refinements may be made by those skilled in the art without departing from the principles of the present invention, and these modifications and refinements shall also fall within the protection scope of the present invention.

Claims (15)

1. An image processing method, comprising:
Displaying current area information and target area information associated with a first object on an interactive interface, wherein the current area information is used for indicating the area information of a current area where the first object is located, and the target area information is used for indicating the area information of a target area where the first object needs to stop;
Acquiring a matching degree displayed on the interactive interface based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region;
Responding to the matching degree being larger than a target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information is used for prompting that when the first object is parked in the current area, detection equipment is allowed to detect a second object borne by the first object, and the detection equipment is deployed at a position associated with the target area;
Wherein the method further comprises: acquiring, from a server, the current area information and the target area information associated with the first object, wherein the current area information is obtained by the server by determining information of the first object through a vehicle positioning algorithm, the target area information is obtained by the server by imaging the first object, and the information of the first object is obtained by scanning the first object.
2. The method of claim 1, wherein obtaining the degree of matching displayed on the interactive interface based on the current region information and the target region information comprises:
And acquiring the matching degree from the server, wherein the matching degree is determined by the server based on the current region information and the target region information.
3. The method according to claim 1, wherein the method further comprises:
and responding to the matching degree not larger than the target threshold, and displaying second prompt information on the interactive interface, wherein the second prompt information is used for prompting and adjusting the current region information.
4. The method of claim 3, wherein obtaining the degree of matching displayed on the interactive interface based on the current region information and the target region information comprises:
and acquiring the adjusted matching degree displayed on the interactive interface based on the target area information and the adjusted current area information.
5. The method according to claim 1, wherein the method further comprises:
and displaying movement information of the first object on the interactive interface, wherein the movement information is used for representing a movement state in which the first object adjusts the current area information to achieve the matching degree.
6. The method according to any one of claims 1 to 5, wherein displaying the current region information and the target region information associated with the first object on the interactive interface comprises:
and displaying the current region information and the target region information on the interactive interface, and prohibiting the display of information other than the current region information and the target region information.
7. An image processing method, comprising:
Acquiring current region information and target region information associated with a first object, wherein the current region information is used for representing region information of a current region where the first object is located, and the target region information is used for representing region information of a target region where the first object needs to stop;
determining a matching degree based on the current region information and the target region information, wherein the matching degree is used for representing the matching degree between the current region and the target region;
Responding to the matching degree being larger than a target threshold, sending first prompt information to a client, wherein the first prompt information is used for prompting that when the first object is parked in the current area, a detection device is allowed to detect a second object borne by the first object, and the detection device is deployed at a position associated with the target area;
Wherein acquiring the current region information and the target region information associated with the first object includes: acquiring information of the first object obtained after scanning the first object; and responding to the acquired information of the first object, obtaining the current area information associated with the first object by using a vehicle positioning algorithm, and imaging the first object to obtain the target area information associated with the first object.
8. The method of claim 7, wherein obtaining current region information associated with the first object comprises:
Acquiring an original image of the first object acquired by an image acquisition device, wherein the image acquisition device is deployed at a position associated with the target area;
determining mask information of the first object based on the original image, wherein the mask information is used for representing the current position of the first object;
the current region information is determined based on the mask information.
9. The method of claim 7, wherein obtaining the target area information associated with the first object comprises:
in response to detecting the first object, outputting a control instruction to an image acquisition device, wherein the image acquisition device is deployed at a position associated with the target area; and
acquiring the target area information captured by the image acquisition device in response to the control instruction.
10. The method of claim 7, wherein the method further comprises:
sending the current area information and the target area information to the client for display.
11. An image detection method for a vehicle, applied to a mobile terminal, comprising:
displaying current area information and target area information associated with a vehicle on an interactive interface, wherein the current area information indicates the area in which the vehicle is currently located, and the target area information indicates the area in which the vehicle is required to park;
acquiring a matching degree displayed on the interactive interface based on the current area information and the target area information, wherein the matching degree represents the degree of matching between the current area and the target area;
in response to the matching degree being greater than a target threshold, displaying first prompt information on the interactive interface, wherein the first prompt information prompts that, with the vehicle parked in the current area, a detection device is allowed to detect an industrial object carried by the vehicle, the detection device being deployed at a position associated with the target area;
wherein the method further comprises: acquiring, from a server, the current area information and the target area information associated with the vehicle, wherein the current area information is determined by the server from information of the vehicle by using a vehicle positioning algorithm, the target area information is obtained by the server by imaging the vehicle, and the information of the vehicle is obtained by scanning the vehicle.
12. The method of claim 11, wherein the target area is not physically deployed in the scene in which the vehicle is located.
13. The method of claim 11, wherein the method further comprises:
displaying a mask of the current area on the interactive interface, wherein the mask is obtained by converting the current area information based on deep learning.
14. An image processing system, comprising:
a server, configured to acquire current area information and target area information associated with a first object, and to determine a matching degree based on the current area information and the target area information, wherein the current area information represents the area in which the first object is currently located, and the target area information represents the area in which the first object is required to park; and
a client, configured to display first prompt information in response to the matching degree being greater than a target threshold, wherein the first prompt information prompts that, with the first object parked in the current area, a detection device is allowed to detect a second object carried by the first object, the detection device being deployed at a position associated with the target area;
wherein the client is further configured to acquire, from the server, the current area information and the target area information associated with the first object, wherein the current area information is determined by the server from information of the first object by using a vehicle positioning algorithm, the target area information is obtained by the server by imaging the first object, and the information of the first object is obtained by scanning the first object.
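Putting the system claim together: a hypothetical server-side check that, given the two areas as rectangles, produces the prompt the client would display. The `Box` type, the IoU measure, the 0.9 threshold, and the message text are all illustrative assumptions layered on the claim, not the patented implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Box:
    """Axis-aligned rectangle: (x1, y1) top-left, (x2, y2) bottom-right."""
    x1: float
    y1: float
    x2: float
    y2: float

def area(r: Box) -> float:
    return (r.x2 - r.x1) * (r.y2 - r.y1)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union; overlap clamps to zero for disjoint rectangles."""
    iw = max(0.0, min(a.x2, b.x2) - max(a.x1, b.x1))
    ih = max(0.0, min(a.y2, b.y2) - max(a.y1, b.y1))
    inter = iw * ih
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def first_prompt(current: Box, target: Box, threshold: float = 0.9) -> Optional[str]:
    """Server-side check; the client displays the returned prompt, if any."""
    if iou(current, target) > threshold:
        return "Vehicle may remain parked; detection of the carried object is allowed."
    return None
```

A perfectly parked vehicle (current equals target) yields IoU 1.0 and therefore a prompt; a vehicle stopped outside the target area yields `None` and no prompt.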
15. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run by a processor, controls a device in which the computer-readable storage medium is located to perform the method of any one of claims 1 to 13.
CN202210356175.0A 2022-04-06 2022-04-06 Image processing method, system and storage medium Active CN114860359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210356175.0A CN114860359B (en) 2022-04-06 2022-04-06 Image processing method, system and storage medium

Publications (2)

Publication Number Publication Date
CN114860359A CN114860359A (en) 2022-08-05
CN114860359B true CN114860359B (en) 2024-05-14

Family

ID=82629232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210356175.0A Active CN114860359B (en) 2022-04-06 2022-04-06 Image processing method, system and storage medium

Country Status (1)

Country Link
CN (1) CN114860359B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106935071A * 2017-04-12 2017-07-07 深圳市金立通信设备有限公司 Parking assistance method and mobile terminal
CN112348894A (en) * 2020-11-03 2021-02-09 中冶赛迪重庆信息技术有限公司 Method, system, equipment and medium for identifying position and state of scrap steel truck
CN112801391A (en) * 2021-02-04 2021-05-14 科大智能物联技术有限公司 Artificial intelligent scrap steel impurity deduction rating method and system
CN112863232A (en) * 2020-12-31 2021-05-28 深圳市金溢科技股份有限公司 Parking lot dynamic partition parking method and parking lot server
CN113743210A (en) * 2021-07-30 2021-12-03 阿里巴巴达摩院(杭州)科技有限公司 Image recognition method and scrap grade recognition method
CN113807228A (en) * 2021-09-10 2021-12-17 北京精英路通科技有限公司 Parking event prompting method and device, electronic equipment and storage medium
CN114189629A (en) * 2021-12-06 2022-03-15 用友网络科技股份有限公司 Image acquisition method, image acquisition device and intelligent scrap steel grading system

Also Published As

Publication number Publication date
CN114860359A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN103366601B Apparatus and method for setting a parking position based on a panoramic image
RU2652452C2 (en) Device and method for network status information representation
KR20200044196A (en) Apparatus, method and system for controlling parking of vehicle
WO2020238284A1 (en) Parking space detection method and apparatus, and electronic device
CN105026212A (en) Fault tolerant display
US20190318546A1 (en) Method and apparatus for processing display data
KR20120086795A Augmented reality system and method for sharing an augmented reality service with a remote party
JP6882868B2 (en) Image processing equipment, image processing method, system
EP3937129A1 (en) Image processing method and related apparatus
EP3754449A1 (en) Vehicle control method, related device, and computer storage medium
EP3712782B1 (en) Diagnosis processing apparatus, diagnosis system, and diagnosis processing method
EP4050892A1 (en) Work assist server, work assist method, and work assist system
CN113221756A (en) Traffic sign detection method and related equipment
JP2019091169A (en) Image processor, method of controlling image processor, and program
CN114860359B (en) Image processing method, system and storage medium
DE102018133030A1 (en) VEHICLE REMOTE CONTROL DEVICE AND VEHICLE REMOTE CONTROL METHOD
CN113721876A (en) Screen projection processing method and related equipment
CN104104902A (en) Holder direction fault detection method and device
CN116533987A (en) Parking path determination method, device, equipment and automatic driving vehicle
CN114286011B (en) Focusing method and device
Castro et al. A prototype of a car parking management service based on wireless sensor networks for ITS
KR20130047439A (en) Apparatus and method for collecting information
CN114187172A (en) Image fusion method and device, computer equipment and computer readable storage medium
KR101871941B1 (en) Camrea operation method, camera, and surveillance system
CN114283604B (en) Method for assisting in parking a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant