CN113870226A - Method and device for detecting compliance of additive, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113870226A
CN113870226A
Authority
CN
China
Prior art keywords
image, target, value, component, pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111151152.8A
Other languages
Chinese (zh)
Other versions
CN113870226B (en)
Inventor
王小刚 (Wang Xiaogang)
崔大勇 (Cui Dayong)
余程鹏 (Yu Chengpeng)
朱文和 (Zhu Wenhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Leading Technology Co Ltd
Original Assignee
Nanjing Leading Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Leading Technology Co Ltd filed Critical Nanjing Leading Technology Co Ltd
Priority to CN202111151152.8A priority Critical patent/CN113870226B/en
Publication of CN113870226A publication Critical patent/CN113870226A/en
Application granted granted Critical
Publication of CN113870226B publication Critical patent/CN113870226B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method and a device for detecting the compliance of an attachment, an electronic device, and a storage medium, which belong to the technical field of image processing. The method comprises: acquiring at least one first image of a target object; comparing each first image with a second image, acquired in the corresponding image acquisition orientation when the target object had no target attachment, to determine whether the target object has the target attachment in a corresponding area; and, if it does, analyzing whether the target attachment is compliant based on the area information of the target attachment. In this way, whether the target object carries the target attachment is judged automatically, and, when it does, so is the compliance of that attachment; manual checking is no longer needed, so the compliance detection of the target attachment is efficient and its labor cost is low.

Description

Method and device for detecting compliance of additive, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for compliance detection of an add-on, an electronic device, and a storage medium.
Background
At present, many scenarios require attaching additional objects to a subject, such as installing danger signs on vehicles transporting dangerous goods, or posting advertisements on ride-hailing vehicles.
Generally, there are requirements on how an attachment is added to the subject, such as its placement position and its size. In the related art, such checks are performed by dedicated workers, so inspection efficiency is low and labor costs are high.
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for detecting the compliance of an attachment, an electronic device, and a storage medium, aiming to solve the problems of low efficiency and high cost caused by manually inspecting whether an attachment is compliant in the related art.
In a first aspect, an embodiment of the present application provides a compliance detection method for an add-on, including:
acquiring at least one first image of a target object;
comparing each first image with a second image, acquired in the corresponding image acquisition orientation when the target object had no target attachment, to determine whether the target object has the target attachment in a corresponding area;
and if the target object has the target attachment in the corresponding area, analyzing whether that target attachment is compliant based on the area information of the target attachment.
In some embodiments, comparing each first image to a second image at a corresponding image capture orientation captured when the target object has no target appendage to determine whether the target object has a target appendage on a corresponding region comprises:
performing color space conversion on the first image to obtain component values of the first image on each color component in an appointed color space;
performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component;
performing color difference analysis based on the component difference images corresponding to the color components to obtain color difference images;
and performing additive detection based on the color difference image to determine whether the target object has a target additive on the corresponding area.
In some embodiments, performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component includes:
comparing a first component value of each pixel in the first image on each color component with a second component value of a corresponding pixel in the second image on the color component;
if the absolute value of the difference value between the first component value and the second component value is greater than the preset threshold value corresponding to the color component, marking the pixel value of the corresponding pixel in the component difference image corresponding to the color component as a first preset value; and if the absolute value of the difference value between the first component value and the second component value is not greater than the preset threshold value, marking the pixel value of the pixel as a second preset value.
In some embodiments, performing a color difference analysis based on the component difference image corresponding to each color component to obtain a color difference image includes:
performing a dot product operation on the pixel values of the same pixel across the component difference images to obtain a dot product image; for a pixel (i, j) in the first image, determining the sum of the pixel values within a region of a specified size centred on the pixel at position (i, j) in the dot product image; if the sum of the pixel values is greater than a first set value, marking the pixel value at position (i, j) in the color difference image as a third preset value, and if it is not, marking it as a fourth preset value, where i and j are positive integers; or
for a pixel (i, j) in the first image, determining, in the component difference image corresponding to each color component, the number of pixels taking the first preset value within a region centred on the pixel at position (i, j), and determining the sum of these counts; if the sum is greater than a second set value, marking the pixel value at position (i, j) in the color difference image as a third preset value, and if it is not, marking it as a fourth preset value.
In some embodiments, analyzing whether the target object is in compliance with the target attachment on the corresponding region based on the region information of the target attachment includes:
based on the position conversion relation between the first image and the second image, converting the area information of the target attachment to obtain a target area;
if the target area is located in the designated area in the second image, determining an additional position compliance of the target additional object, and/or if the size of the target area meets a preset size requirement, determining a size compliance of the target additional object.
In some embodiments, analyzing whether the target object is in compliance with the target attachment on the corresponding region based on the region information of the target attachment includes:
cutting the first image based on the area information of the target attachment to obtain a sub-image;
determining content compliance of the target attachment if the sub-image is included among the saved images of compliant attachments; or, if the sub-image does not contain prohibited characters and/or images, determining content compliance of the target attachment.
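The cutting step above can be sketched in a few lines of Python. The nested-list image representation and the (top, left, height, width) layout of the region information are illustrative assumptions; the patent does not fix a bounding-box format.

```python
def crop_subimage(image, region):
    """Cut the sub-image covered by the target attachment out of the
    first image. `image` is a nested list of pixel values and `region`
    a (top, left, height, width) bounding box (an assumed layout)."""
    top, left, height, width = region
    return [row[left:left + width] for row in image[top:top + height]]
```

The resulting sub-image can then be matched against the saved images of compliant attachments or scanned for prohibited content.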
In some embodiments, before comparing each first image with a second image in the corresponding image acquisition orientation acquired when the target object has no target attachment to determine whether the target object has the target attachment in a corresponding area, the method further comprises:
determining that the first image meets a preset detection condition, wherein the preset detection condition comprises any combination of the following conditions:
the time difference between the acquisition time of the first image and the time of indicating image acquisition on the target object is less than the preset time length;
the distance between the acquisition position of the first image and the position of the target object when the target object is indicated to be subjected to image acquisition is smaller than a preset distance;
the first image meets a preset integrity requirement.
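A minimal sketch of checking these preset detection conditions is given below. The function names, the planar-distance approximation, and the boolean integrity flag are illustrative assumptions, and the conjunction of all three conditions is just one of the combinations the text allows.

```python
from datetime import datetime, timedelta

def meets_detection_conditions(capture_time, request_time,
                               capture_pos, request_pos,
                               max_delay_s, max_distance,
                               image_is_complete):
    """Check the preset detection conditions for a first image:
    capture time close to the acquisition request, capture position
    close to the target object's position, and image completeness.
    Positions are (x, y) pairs in a common unit; a real system would
    likely use geodesic coordinates and a proper distance formula."""
    time_ok = abs((capture_time - request_time).total_seconds()) < max_delay_s
    dx = capture_pos[0] - request_pos[0]
    dy = capture_pos[1] - request_pos[1]
    distance_ok = (dx * dx + dy * dy) ** 0.5 < max_distance
    return time_ok and distance_ok and image_is_complete
```

A first image failing any enabled condition would be rejected before the comparison step runs.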
In some embodiments, further comprising:
and if the target object does not have the target additional object on the corresponding area, analyzing whether the target object lacks the target additional object on the corresponding area or not based on a preset additional requirement.
In a second aspect, an embodiment of the present application provides an add-on compliance detection device, including:
an acquisition module for acquiring at least one first image of a target object;
the determining module is used for comparing each first image with a second image in a corresponding image acquisition direction acquired when the target object has no target additive to determine whether the target object has the target additive in a corresponding area;
and the analysis module is used for analyzing whether the target additional object on the corresponding area of the target object is in compliance or not based on the area information of the target additional object if the target object has the target additional object on the corresponding area.
In some embodiments, the determining module is specifically configured to:
performing color space conversion on the first image to obtain component values of the first image on each color component in an appointed color space;
performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component;
performing color difference analysis based on the component difference images corresponding to the color components to obtain color difference images;
and performing additive detection based on the color difference image to determine whether the target object has a target additive on the corresponding area.
In some embodiments, the determining module is specifically configured to:
comparing a first component value of each pixel in the first image on each color component with a second component value of a corresponding pixel in the second image on the color component;
if the absolute value of the difference value between the first component value and the second component value is greater than the preset threshold value corresponding to the color component, marking the pixel value of the corresponding pixel in the component difference image corresponding to the color component as a first preset value; and if the absolute value of the difference value between the first component value and the second component value is not greater than the preset threshold value, marking the pixel value of the pixel as a second preset value.
In some embodiments, the determining module is specifically configured to:
performing a dot product operation on the pixel values of the same pixel across the component difference images to obtain a dot product image; for a pixel (i, j) in the first image, determining the sum of the pixel values within a region of a specified size centred on the pixel at position (i, j) in the dot product image; if the sum of the pixel values is greater than a first set value, marking the pixel value at position (i, j) in the color difference image as a third preset value, and if it is not, marking it as a fourth preset value, where i and j are positive integers; or
for a pixel (i, j) in the first image, determining, in the component difference image corresponding to each color component, the number of pixels taking the first preset value within a region centred on the pixel at position (i, j), and determining the sum of these counts; if the sum is greater than a second set value, marking the pixel value at position (i, j) in the color difference image as a third preset value, and if it is not, marking it as a fourth preset value.
In some embodiments, the analysis module is specifically configured to:
based on the position conversion relation between the first image and the second image, converting the area information of the target attachment to obtain a target area;
if the target area is located in the designated area in the second image, determining an additional position compliance of the target additional object, and/or if the size of the target area meets a preset size requirement, determining a size compliance of the target additional object.
In some embodiments, the analysis module is specifically configured to:
cutting the first image based on the area information of the target attachment to obtain a sub-image;
determining content compliance of the target attachment if the sub-image is included among the saved images of compliant attachments; or, if the sub-image does not contain prohibited characters and/or images, determining content compliance of the target attachment.
In some embodiments, further comprising a verification module for:
before comparing each first image with a second image in a corresponding image acquisition orientation acquired when the target object has no target attachment to determine whether the target object has the target attachment on a corresponding area, determining that the first image meets a preset detection condition, wherein the preset detection condition comprises any combination of the following conditions:
the time difference between the acquisition time of the first image and the time of indicating image acquisition on the target object is less than the preset time length;
the distance between the acquisition position of the first image and the position of the target object when the target object is indicated to be subjected to image acquisition is smaller than a preset distance;
the first image meets a preset integrity requirement.
In some embodiments, the analysis module is further to:
and if the target object does not have the target additional object on the corresponding area, analyzing whether the target object lacks the target additional object on the corresponding area or not based on a preset additional requirement.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the compliance detection method of the add-on.
In a fourth aspect, embodiments of the present application provide a storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is able to execute the compliance detection method of the above-mentioned attachment.
In the embodiment of the application, at least one first image of a target object is acquired, and each first image is compared with a second image, acquired in the corresponding image acquisition orientation when the target object had no target attachment, to determine whether the target object has the target attachment in a corresponding area; if it does, whether the target attachment is compliant is analyzed based on the area information of the target attachment. In this way, whether the target object carries the target attachment is judged automatically, and, when it does, so is the compliance of that attachment; manual checking is no longer needed, so the compliance detection of the target attachment is efficient and its labor cost is low.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic view of an application scenario of a compliance detection method for an add-on provided in an embodiment of the present application;
FIG. 2 is a flow chart of a compliance detection method for an add-on provided by an embodiment of the present application;
fig. 3 is a flowchart of a method for determining whether a target object has a target attachment on a corresponding area according to an embodiment of the present disclosure;
fig. 4a is a schematic diagram of a hue difference image provided in an embodiment of the present application;
FIG. 4b is a schematic diagram of a saturation difference image according to an embodiment of the present application;
fig. 4c is a schematic diagram of a luminance difference image according to an embodiment of the present disclosure;
FIG. 4d is a schematic diagram of a dot product diagram provided by an embodiment of the present application;
FIG. 5 is a schematic interaction flow chart illustrating compliance detection of an advertisement on a vehicle according to an embodiment of the present application;
FIG. 6 is a schematic view of an orientation of a vehicle during image acquisition according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a rectangular frame of an advertisement posted on the right side of a vehicle according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a compliance testing device incorporating an embodiment of the present application;
fig. 9 is a hardware configuration diagram of an electronic device for implementing a compliance detection method of an add-on according to an embodiment of the present application.
Detailed Description
In order to solve the problems of low efficiency and high cost in the prior art that whether an attachment is in compliance is manually checked, embodiments of the present application provide a method and an apparatus for detecting compliance of an attachment, an electronic device, and a storage medium.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it should be understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
To facilitate understanding of the present application, the present application refers to technical terms in which:
a target attachment, an object attached to a target object in some form, such as an advertisement posted on a vehicle, a sign mounted on a vehicle, and the like.
A color space is a way of describing color, and can be defined in several ways, such as the RGB color space and the HSV color space. The RGB color space is defined based on the principle of luminous objects; its three primary colors R, G and B correspond to red, green and blue light, and the RGB value of a pixel consists of its component values on each color component in the RGB color space. The HSV color space is derived from visual perception: H is hue, S is saturation and V is value (brightness), and a pixel's hue, saturation and brightness values are its component values on each color component in the HSV color space. Different color spaces can be converted into one another.
Fig. 1 is an application scenario diagram of a compliance detection method for an add-on provided in an embodiment of the present application, and includes a terminal 100 and a server 200.
The terminal 100 can be any smart device such as a smartphone, a tablet computer or a portable personal computer, and various apps can be installed on it. For example, in this embodiment of the application, the app installed on the terminal 100 may be one providing a ride-hailing order service; a ride-hailing driver can accept ride-hailing orders through this app and can also upload images of the ride-hailing vehicle through it.
The server 200 can provide various network services for the terminal 100; for each application on the terminal 100, the server 200 may be regarded as the background server providing the corresponding network service. For example, in this embodiment of the application, the server 200 may receive an order-taking request uploaded by the terminal 100 and dispatch a ride-hailing order to it; as another example, the server 200 may send a vehicle body image upload instruction to the terminal 100 and, after receiving the images sent back, check whether the corresponding vehicle carries target attachments such as advertisements and signs and whether those target attachments are compliant.
The server 200 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center.
Specifically, the server 200 may include a processor 210 (CPU), a memory 220, an input device 230, an output device 240, and the like; the input device 230 may include a keyboard, a mouse, a touch screen, and the like, and the output device 240 may include a display device such as a Liquid Crystal Display (LCD) or a Cathode Ray Tube (CRT).
Memory 220 may include Read Only Memory (ROM) and Random Access Memory (RAM), and provides processor 210 with program instructions and data stored in memory 220. In the present embodiment, the memory 220 may be used to store a program of a compliance detection method of any of the additions in the present embodiment.
The processor 210 is configured to execute the steps of the compliance detection method of any of the additions in the embodiment of the present application according to the obtained program instructions by calling the program instructions stored in the memory 220.
The terminal 100 and the server 200 are connected via a network to communicate with each other. Optionally, the network uses standard communication techniques and/or protocols. The network is typically the Internet, but can be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), or any combination of mobile, wired or wireless networks and private or virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Networks (VPN) and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the techniques described above.
It should be noted that the application architecture diagram in the embodiment of the present application is intended to illustrate the technical solution more clearly and does not limit it; the technical solution provided in the embodiments of the present application is equally applicable to similar problems under other application architectures and business applications.
Fig. 2 is a flowchart of a compliance detection method of an add-on provided by an embodiment of the present application, the method is applied to the server in fig. 1, and the method includes the following steps.
In step S201, at least one first image of a target object is acquired.
Typically, each first image is an RGB image.
In order to perform compliance detection on the target attachment more comprehensively, images of the target object may be acquired in different orientations; thus the number of first images may be one, two, three or more, with one first image acquired in each orientation.
In step S202, each first image is compared with a second image in a corresponding image capturing orientation captured when the target object has no target attachment to determine whether the target object has a target attachment on the corresponding area.
Each first image is an image of a currently acquired target object, and the second image corresponding to each first image is an image acquired in the same image acquisition direction acquired when the target object has no target attachment, so that whether the target object has the target attachment in the corresponding area can be determined by comparing each first image with the second image in the corresponding image acquisition direction.
In specific implementation, it may be determined whether the target object has the target attachment on the corresponding area according to a process shown in fig. 3, where the process includes the following steps:
s301 a: and performing color space conversion on the first image to obtain component values of the first image on each color component in the designated color space.
For example, the first image is converted from an RGB color space to an HSV color space, and a hue value, a saturation value, and a brightness value of the first image in the HSV color space are obtained, where the hue value, the saturation value, and the brightness value are component values of each color component of the first image in the HSV color space.
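As a concrete illustration of step S301a, the following Python sketch converts an RGB image to per-pixel hue, saturation and value planes using the standard library's colorsys module. The nested-list image representation and the [0, 1] component ranges are illustrative assumptions; a production system would more likely use a vectorized conversion such as OpenCV's cvtColor.

```python
import colorsys

def to_hsv_components(rgb_image):
    """Convert an RGB image (a nested list of (r, g, b) tuples with
    values in 0..255) into three per-pixel component planes: hue,
    saturation and value, each a nested list of floats in [0, 1]."""
    h_plane, s_plane, v_plane = [], [], []
    for row in rgb_image:
        h_row, s_row, v_row = [], [], []
        for r, g, b in row:
            # colorsys expects floats in [0, 1]
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            h_row.append(h)
            s_row.append(s)
            v_row.append(v)
        h_plane.append(h_row)
        s_plane.append(s_row)
        v_plane.append(v_row)
    return h_plane, s_plane, v_plane
```

The three returned planes are the per-component inputs to the difference analysis of step S302a.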
S302 a: and performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component.
In specific implementation, a first component value of each pixel in the first image on each color component may be compared with a second component value of the corresponding pixel in the second image on that color component; if the absolute value of the difference between the first component value and the second component value is greater than the preset threshold corresponding to the color component, the pixel value of the corresponding pixel in the component difference image for that color component may be marked as a first preset value, and otherwise it may be marked as a second preset value. For example, the first preset value may be 1 and the second preset value 0; alternatively, the first preset value may be 2 and the second preset value 1.
It should be noted that the pixels in the first image and the second image are in a one-to-one correspondence, and the pixels in each component difference image obtained based on the first image and the second image are also in a one-to-one correspondence with the pixels in the first image.
Taking HSV color space as an example, the above process is:
comparing the tone value of each pixel in the first image with the tone value of the corresponding pixel in the second image, if the absolute value of the tone difference value between the two is greater than the tone threshold, the pixel value of the corresponding pixel in the tone difference image (i.e. the component difference image corresponding to the tone component) can be marked as a first preset value, and if the absolute value of the tone difference value between the two is not greater than the tone threshold, the pixel value of the corresponding pixel in the tone difference image can be marked as a second preset value.
The saturation value of each pixel in the first image is compared with the saturation value of the corresponding pixel in the second image. If the absolute value of the saturation difference between the two is greater than the saturation threshold, the pixel value of the corresponding pixel in the saturation difference image (i.e. the component difference image corresponding to the saturation component) can be marked as the first preset value; if the absolute value of the saturation difference between the two is not greater than the saturation threshold, the pixel value of the corresponding pixel in the saturation difference image can be marked as the second preset value.
The brightness value of each pixel in the first image is compared with the brightness value of the corresponding pixel in the second image. If the absolute value of the brightness difference between the two is greater than the brightness threshold, the pixel value of the corresponding pixel in the brightness difference image (i.e. the component difference image corresponding to the brightness component) can be marked as the first preset value; if the absolute value of the brightness difference between the two is not greater than the brightness threshold, the pixel value of the corresponding pixel in the brightness difference image can be marked as the second preset value.
The first image is identical in size to the hue difference image, the saturation difference image, and the luminance difference image, and each pixel in the first image corresponds to a pixel in the hue difference image, the saturation difference image, and the luminance difference image.
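The per-component thresholding described above can be sketched as follows. This is a minimal pure-Python illustration; the function name, parameter names, and default preset values are assumptions for illustration, not names from this application:

```python
def component_difference(first_comp, second_comp, threshold,
                         first_preset=1, second_preset=0):
    """Build a binary component difference image for one color component.

    first_comp / second_comp: 2-D lists holding the component values of the
    first image and the pixel-aligned second image; threshold: the preset
    threshold corresponding to this color component."""
    rows, cols = len(first_comp), len(first_comp[0])
    diff = [[second_preset] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # mark the pixel when the absolute difference exceeds the threshold
            if abs(first_comp[i][j] - second_comp[i][j]) > threshold:
                diff[i][j] = first_preset
    return diff
```

Running it once per channel (hue, saturation, brightness) with the corresponding threshold would yield the hue, saturation, and brightness difference images.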
S303 a: and performing color difference analysis based on the component difference images corresponding to the color components to obtain color difference images.
Target add-ons such as advertisements and markers occupy a continuous area, whereas non-target marks such as mud spots and gray spots are relatively dispersed. Therefore, in order to accurately find the target add-on, the values of the pixels in the component difference image corresponding to each color component can be considered comprehensively, so as to distinguish the pixels belonging to the target add-on from the pixels not belonging to it in the first image.
In some embodiments, the pixel values of all pixels in each component difference image are represented by 0 and 1 (1 is a first preset value, and 0 is a second preset value). In this case, a dot product operation may be performed on pixel values corresponding to the same pixel in each component difference image to obtain a dot product map, then, for a pixel (i, j) located in the ith row and the jth column in the first image, a sum of pixel values in a region of a specified region size centered on the pixel at (i, j) in the dot product map is determined, if the sum of pixel values is greater than a first set value, the pixel value at (i, j) in the color difference image is marked as a third preset value, and if the sum of pixel values is not greater than the first set value, the pixel value at (i, j) in the color difference image is marked as a fourth preset value. Wherein the third preset value is, for example, 1, and the fourth preset value is, for example, 0.
Assuming that the size of the first image is 4 × 4, and assuming that the hue difference image corresponding to the first image is shown in fig. 4a, the saturation difference image corresponding to the first image is shown in fig. 4b, and the luminance difference image corresponding to the first image is shown in fig. 4c, the dot product of the pixel values corresponding to the same pixel in the hue difference image, the saturation difference image, and the luminance difference image is shown in fig. 4 d.
Assuming that the specified region size is R × C, for each pixel (i, j) in the first image, a window of size R × C may be centered on the pixel at (i, j) in fig. 4d, and the sum of pixel values in the region covered by the window is determined. If the sum of pixel values is greater than a first set value, such as (R × C)/2, the pixel value at (i, j) in the color difference image may be labeled as 1; if the sum of pixel values is not greater than (R × C)/2, the pixel value at (i, j) in the color difference image may be labeled as 0.
It should be noted that, for pixels located on the boundary in the first image, for example, pixels located in the first row and the first column, there may be a case where there are no pixels in the area covered by the window size of R × C in the dot product map, and in this case, the area without pixels in the dot product map may be filled with 0.
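The dot-product-and-window-sum step of S303a might be sketched as below. This pure-Python illustration assumes the component difference images use 1 and 0 as the first and second preset values, with zero-padding at the boundary as described above; all names are illustrative:

```python
def color_difference_image(comp_diffs, R, C, first_set_value):
    """comp_diffs: binary (0/1) component difference images of equal size.
    R, C: odd window height and width; the dot product map is treated as
    zero-padded where the window extends past the boundary."""
    rows, cols = len(comp_diffs[0]), len(comp_diffs[0][0])
    # element-wise product across the component difference images (the dot product map)
    dot = [[1] * cols for _ in range(rows)]
    for img in comp_diffs:
        for i in range(rows):
            for j in range(cols):
                dot[i][j] *= img[i][j]
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0
            for m in range(i - R // 2, i + R // 2 + 1):
                for n in range(j - C // 2, j + C // 2 + 1):
                    if 0 <= m < rows and 0 <= n < cols:
                        s += dot[m][n]  # out-of-range positions count as 0
            if s > first_set_value:
                out[i][j] = 1
    return out
```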
In some embodiments, for a pixel (i, j) located in the ith row and the jth column in the first image, the number of pixels, which takes a value of a first preset value, in an area of a specified area size centered on the pixel at (i, j) in the component difference image corresponding to each color component may be determined, the sum of the numbers of pixels may be determined, if the sum of the numbers of pixels is greater than a second set value, the pixel value at (i, j) in the color difference image may be marked as a third preset value, and if the sum of the numbers of pixels is not greater than the second set value, the pixel value at (i, j) in the color difference image may be marked as a fourth preset value.
Still assuming that the designated area size is R × C, for each pixel (i, j) in the first image, a window of size R × C is centered on the pixel at (i, j) in each component difference image: the number of pixels taking the first preset value in the area covered by the window in fig. 4a is N1, in fig. 4b is N2, and in fig. 4c is N3. The sum of N1, N2 and N3 is then determined. If the sum of the three is greater than a second set value, e.g. 0.7 × (R × C), the pixel value at (i, j) in the color difference image may be labeled as 1; if the sum of the three is not greater than 0.7 × (R × C), the pixel value at (i, j) in the color difference image may be labeled as 0.
It should be noted that, for pixels located on the boundary of the first image, for example pixels in the first row or the first column, part of the area covered by the R × C window may fall outside the component difference images; in this case, the areas without pixels in the hue difference image, the saturation difference image, and the brightness difference image may be filled with the second preset value.
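The count-based alternative can be sketched similarly. Again a pure-Python illustration with assumed names; window positions that fall outside the images simply contribute nothing, which matches filling with the second preset value:

```python
def color_difference_by_count(comp_diffs, R, C, second_set_value,
                              first_preset=1):
    """Mark (i, j) in the color difference image when the total count of
    first-preset-value pixels, over the R x C window in every component
    difference image, exceeds second_set_value."""
    rows, cols = len(comp_diffs[0]), len(comp_diffs[0][0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            count = 0
            for img in comp_diffs:
                for m in range(i - R // 2, i + R // 2 + 1):
                    for n in range(j - C // 2, j + C // 2 + 1):
                        if (0 <= m < rows and 0 <= n < cols
                                and img[m][n] == first_preset):
                            count += 1
            if count > second_set_value:
                out[i][j] = 1
    return out
```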
S304 a: and performing additive detection based on the color difference image to determine whether the target object has the target additive on the corresponding area.
In specific implementation, the color difference image may be subjected to aggregation processing, and if the aggregation processing result indicates that no aggregation object exists, it indicates that the target object does not have the target attachment on the corresponding region; if the aggregation processing result indicates that the aggregation object exists, the target object is indicated to have the target addition object on the corresponding area, and the minimum rectangular area capable of containing the aggregation object can be determined as the area where the target addition object is located.
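The aggregation processing is not pinned down in the text; one plausible reading is connected-component grouping of the marked pixels followed by taking each group's minimum bounding rectangle, as in this pure-Python sketch (a BFS over 4-connected neighbors; all names are illustrative):

```python
from collections import deque

def aggregate_regions(color_diff):
    """Group 4-connected pixels of value 1 in the color difference image and
    return the minimum bounding rectangle (top, left, bottom, right) of each
    group; an empty list means no aggregation object, i.e. no target add-on."""
    rows, cols = len(color_diff), len(color_diff[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for i in range(rows):
        for j in range(cols):
            if color_diff[i][j] == 1 and not seen[i][j]:
                top = bottom = i
                left = right = j
                q = deque([(i, j)])
                seen[i][j] = True
                while q:
                    r, c = q.popleft()
                    top, bottom = min(top, r), max(bottom, r)
                    left, right = min(left, c), max(right, c)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and color_diff[nr][nc] == 1
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            q.append((nr, nc))
                boxes.append((top, left, bottom, right))
    return boxes
```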
In step S203, if the target object has the target attachment on the corresponding area, whether the target attachment on the corresponding area of the target object is compliant is analyzed based on the area information of the target attachment.
In some embodiments, the region information of the target additional object may be transformed based on a position transformation relationship between the first image and the second image to obtain a target region, and if the target region is located in a specified region in the second image, the additional position compliance of the target additional object is determined, and/or if the size of the target region meets a preset size requirement, the size compliance of the target additional object is determined.
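The position and size checks might look like the following sketch, assuming rectangular regions given as (top, left, bottom, right) after transformation into the second image, and a preset size requirement expressed as height and width ranges; all names here are assumptions:

```python
def check_region_compliance(target_region, designated_region, size_range):
    """target_region / designated_region: (top, left, bottom, right) in the
    second (template) image; size_range: ((min_h, max_h), (min_w, max_w)).
    Returns (position_compliant, size_compliant)."""
    t, l, b, r = target_region
    dt, dl, db, dr = designated_region
    # position compliance: the target region lies inside the designated region
    position_ok = dt <= t and dl <= l and b <= db and r <= dr
    # size compliance: height and width fall within the preset ranges
    h, w = b - t + 1, r - l + 1
    (min_h, max_h), (min_w, max_w) = size_range
    size_ok = min_h <= h <= max_h and min_w <= w <= max_w
    return position_ok, size_ok
```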
In some embodiments, the add-ons selectable by the owner of the target object are managed in a unified manner. In this case, the first image may be cropped based on the region information of the target add-on to obtain a sub-image, and the sub-image is compared with the stored images of compliant add-ons. If the sub-image is included in an image of a compliant add-on, the content of the target add-on is determined to be compliant; if the sub-image is not included in any image of a compliant add-on, the content of the target add-on is determined to be non-compliant.
Considering that the number of compliant add-ons is large, in order to facilitate their management, the compliant add-ons may be classified, and a correspondence between each class of compliant add-on and at least one target object may be specified. For example, the compliant add-ons are divided into class A and class B, with class A add-ons assigned to target objects 1-10 and class B add-ons assigned to target objects 11-20. In that case, class B add-ons may not be added to target objects 1-10, nor class A add-ons to target objects 11-20.
For this reason, in the above process, when comparing the sub-image with the stored image of the compliance add-on, the category of the compliance add-on corresponding to the current target object may be determined based on the established correspondence between the target object and the category of the compliance add-on, then the sub-image is compared with the stored image of the compliance add-on corresponding to the category, and if the sub-image is included in the image of the compliance add-on corresponding to the category, it indicates that the corresponding target add-on is the compliance add-on which the current target object is allowed to be added, and the content compliance of the target add-on is determined; if the sub-image is not included in the image of the compliance add-on of the corresponding category, it is indicated that the corresponding target add-on is not a compliance add-on allowing the current target object to be added, and it may be determined that the content of the target add-on is not compliant. In addition, the sub-images are compared with the stored images of the compliance additives of the corresponding types, the number of the images of the compliance additives needing to be compared can be reduced, and therefore the speed of compliance judgment is increased.
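The category-restricted comparison could be sketched as below. For simplicity the "sub-image is included in the image of the compliance add-on" test is reduced to an equality check against stored items; a real system would use image matching. All names are illustrative:

```python
def check_content_compliance(sub_image, target_id, category_of_target,
                             compliant_images):
    """category_of_target: target object id -> compliant add-on class;
    compliant_images: class -> stored images of that class's compliant
    add-ons. Restricting comparison to the target's own class reduces the
    number of images that must be compared."""
    category = category_of_target.get(target_id)
    if category is None:
        return False  # no class assigned: cannot be a permitted add-on
    return any(sub_image == img for img in compliant_images.get(category, []))
```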
In some embodiments, the owner of the target object may freely select the add-on. In this case, after the first image is cropped based on the area information of the target add-on to obtain the sub-image, if the sub-image does not contain prohibited characters and/or images, the content of the target add-on is determined to be compliant; if the sub-image contains prohibited characters or images, the content of the target add-on is determined to be non-compliant. Propagating prohibited characters or images may have negative effects, so a target add-on containing them is determined to be non-compliant.
In step S204, if the target object does not have the target attachment on the corresponding area, whether the target object lacks the target attachment on the corresponding area is analyzed based on the preset attachment requirement.
For the case of the uniform management of the add-on, all the target objects may be required to have the add-on the designated area, that is, the preset add-on requirement is that the target add-on is on the designated area, so that it is also possible to determine whether the target object lacks the target add-on the corresponding area. When it is determined that the target object lacks the target add-on the corresponding area, the user may be further prompted to add the target add-on the corresponding area of the target object.
In addition, in order to improve detection reasonableness, before each first image is compared with a second image in a corresponding image acquisition direction acquired when the target object has no target attachment so as to determine whether the target object has the target attachment on the corresponding area, the first image can be further determined to meet preset detection conditions, and the preset detection conditions include any combination of the following conditions:
Condition one: the time difference between the acquisition time of the first image and the time at which image acquisition of the target object was instructed is less than a preset duration;
Condition two: the distance between the acquisition position of the first image and the position of the target object when image acquisition was instructed is less than a preset distance;
Condition three: the first image meets a preset integrity requirement, for example, the image size is larger than a specified image size.
The first condition is used for ensuring that the first image is shot in the last period of time, the second condition is used for ensuring that the first image is shot for the target object, and the third condition is used for ensuring the integrity of the first image so as to improve the comprehensiveness of compliance detection.
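The three preset detection conditions can be sketched as a single gate function. Timestamps in seconds and positions in a local metric frame are assumptions for illustration, and the integrity check is reduced to a boolean flag (e.g. the output of a completeness model):

```python
import math

def meets_detection_conditions(capture_time, instruct_time, max_seconds,
                               capture_pos, target_pos, max_meters,
                               image_complete):
    """capture_time / instruct_time: seconds; capture_pos / target_pos:
    (x, y) coordinates in a local metric frame; image_complete: result of
    the integrity check."""
    # condition one: the image was captured within the allowed time window
    time_ok = abs(capture_time - instruct_time) < max_seconds
    # condition two: the image was captured near the target object
    dx = capture_pos[0] - target_pos[0]
    dy = capture_pos[1] - target_pos[1]
    dist_ok = math.hypot(dx, dy) < max_meters
    # condition three: the image satisfies the integrity requirement
    return time_ok and dist_ok and image_complete
```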
In specific implementation, when the size or the position of the target object is determined to be not in compliance, warning information can be sent to remind the owner of the target object of placing the target attachment in compliance; when the content of the target object is determined to be not in compliance, the warning information is sent, and certain functions of the terminal corresponding to the target object can be limited, such as forbidding the corresponding terminal to take over the network car booking order.
The following describes the scheme of the embodiment of the present application, taking the target object as a network car booking and the additional object as an advertisement as an example.
Fig. 5 is a schematic interaction flow diagram for compliance detection of an advertisement on a vehicle according to an embodiment of the present application, where a vehicle end in fig. 5 corresponds to the terminal 100 in fig. 1, and a vehicle networking platform corresponds to the server 200 in fig. 1, and the flow includes the following steps:
in step S501, the internet of vehicles platform issues an image acquisition instruction to the vehicle owner.
In specific implementation, the Internet of Vehicles platform can randomly issue image acquisition instructions to some vehicle owner ends, or send image acquisition instructions only to designated vehicle owner ends. The image acquisition instruction is used to instruct the vehicle owner end to acquire and upload images of the bound vehicle.
In step S502, the vehicle owner uploads the image acquisition data to the vehicle networking platform.
In specific implementation, after receiving the image acquisition instruction, the vehicle owner end can adjust the camera to take pictures of the vehicle in four directions, namely front, back, left and right, as shown in fig. 6. Then, image acquisition data including the 4 acquired images and the acquisition time and acquisition position of each image are uploaded to the Internet of Vehicles platform.
In step S503, the internet of vehicles platform checks whether the image meets a preset detection condition, and if not, the process goes to step S504; if yes, the process proceeds to S505.
In specific implementation, checks in the following three dimensions can be performed on each image:
time dimension verification: and judging whether the time difference between the image acquisition time and the time of issuing the image acquisition instruction is smaller than a set time threshold, if so, judging that the image is uploaded in the valid period, otherwise, judging that the image is uploaded after the valid period is exceeded, and uploading the image again.
Spatial dimension verification: and judging whether the distance between the image acquisition position and the position of the vehicle when the image acquisition instruction is issued is smaller than a set distance threshold value, if so, judging that the vehicle is shot, otherwise, judging that the vehicle is not shot, and uploading the image again.
Integrity dimension verification: the front, back, left and right images are input into a neural network model; when the output of the model indicates that all four images are complete, the uploaded images are judged to be complete, otherwise they are judged to be incomplete and need to be uploaded again. The neural network model is trained on a large number of image samples (including complete vehicle body images and incomplete vehicle body images).
In addition, when images repeatedly uploaded by the vehicle owner end fail to meet the preset detection conditions, the Internet of Vehicles platform can also generate an alarm to prompt manual intervention.
In step S504, the internet of vehicles platform instructs the vehicle owner to upload again.
In step S505, the internet of vehicles platform determines whether there is an advertisement on the vehicle based on the image, if not, then the process goes to step S506; if yes, the process proceeds to S507.
In a specific implementation, 4 template images of the front, back, left and right of the vehicle when no advertisement is posted can be obtained in advance. Then the front image of the vehicle is compared with the front template image, the back image with the back template image, the left image with the left template image, and the right image with the right template image, to determine whether an advertisement is posted on each of the front, back, left and right areas of the vehicle.
Since the image comparison process is similar for each direction, the comparison of the right side image of the vehicle with the right side template image is described here as an example of detecting whether there is an advertisement on the right side of the vehicle.
In specific implementation, the right image may be converted from the RGB space to the HSV space to obtain a hue map Hf, a saturation map Sf, and a brightness map Vf, and assuming that the hue map obtained after the right template image is converted to the HSV space is Ht, the saturation map is St, and the brightness map is Vt, the color difference image I may be determined according to the following formula.
(ix, jy) = F(i, j)   (1)

$$H_c(i,j)=\begin{cases}1, & \text{if } |H_f(i,j)-H_t(ix,jy)| > thr\_h\\0, & \text{otherwise}\end{cases}\tag{2}$$

$$S_c(i,j)=\begin{cases}1, & \text{if } |S_f(i,j)-S_t(ix,jy)| > thr\_s\\0, & \text{otherwise}\end{cases}\tag{3}$$

$$V_c(i,j)=\begin{cases}1, & \text{if } |V_f(i,j)-V_t(ix,jy)| > thr\_v\\0, & \text{otherwise}\end{cases}\tag{4}$$

$$I(i,j)=\begin{cases}1, & \text{if } \sum_{(m,n)} H_c(m,n)\,S_c(m,n)\,V_c(m,n) > \dfrac{R\times C}{2}\\0, & \text{otherwise}\end{cases}\tag{5}$$

Wherein (i, j) represents the pixel of the ith row and the jth column in the right image, F represents the position conversion relationship between the right image and the right template image, (ix, jy) represents the pixel in the right template image corresponding to the pixel of the ith row and the jth column in the right image, $H_c$ is the hue difference image, thr_h is the hue threshold, $S_c$ is the saturation difference image, thr_s is the saturation threshold, $V_c$ is the brightness difference image, thr_v is the brightness threshold, the summation index (m, n) ranges over a region of length R and width C centered at (i, j), the values of R and C are preset and both odd, and I(i, j) is the color difference image.
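Assuming NumPy (including `sliding_window_view`, available from NumPy 1.20) and, for simplicity, taking the position conversion F as the identity mapping, the per-pixel thresholding and windowed dot-product sum described above can be written in vectorized form; the function name and the zero-padding at the border are illustrative choices:

```python
import numpy as np

def color_difference_map(Hf, Sf, Vf, Ht, St, Vt, thr_h, thr_s, thr_v, R, C):
    """Formulas (2)-(5) with F taken as the identity mapping for illustration.
    Component maps are float arrays of equal shape; R and C are odd."""
    Hc = (np.abs(Hf - Ht) > thr_h).astype(np.uint8)   # hue thresholding
    Sc = (np.abs(Sf - St) > thr_s).astype(np.uint8)   # saturation thresholding
    Vc = (np.abs(Vf - Vt) > thr_v).astype(np.uint8)   # brightness thresholding
    dot = Hc * Sc * Vc
    # zero-pad so the R x C window is defined at the borders, then sum windows
    padded = np.pad(dot, ((R // 2, R // 2), (C // 2, C // 2)))
    windows = np.lib.stride_tricks.sliding_window_view(padded, (R, C))
    sums = windows.sum(axis=(2, 3))
    return (sums > (R * C) / 2).astype(np.uint8)      # the color difference image
```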
Then, the color difference image can be subjected to aggregation processing, and if an aggregation object is not obtained, the fact that the right side of the vehicle does not have advertisements is indicated; if the aggregation object is obtained, it indicates that there is an advertisement on the right side of the vehicle, and circumscribed rectangular frame information corresponding to each advertisement can be obtained, fig. 7 is a schematic diagram of a rectangular frame of an advertisement posted on the right side of the vehicle according to an embodiment of the present application, and fig. 7 includes 5 rectangular frames, each of which indicates an advertisement.
In step S506, the car networking platform checks whether the preset additional requirements are met.
For example, if the ride-hailing operator requires an advertisement to be posted on the right side of the vehicle but no advertisement is detected there, it is determined that the vehicle has not posted the advertisement as required, and a prompt message can be sent instructing the vehicle owner to post the advertisement as required.
In step S507, the internet of vehicles platform determines whether the advertisement is compliant one by one.
In particular implementation, compliance determination for each advertisement includes the following three aspects:
1. and (5) judging the size compliance.
For example, based on the position conversion relationship between the right image and the right template image, the region information of each advertisement is converted to obtain a target region, and if the size of the target region meets the preset size requirement, the size compliance of the advertisement is determined.
2. And (5) judging the position compliance.
For example, the area information of each advertisement may be transformed based on the position transformation relationship from the right side image to the right side template image to obtain a target area, and if the target area is located in a designated area in the right side template image, the advertisement placement position compliance may be determined.
3. And judging content compliance.
For example, the area where each advertisement is located in fig. 7 is intercepted to obtain a sub-image, the sub-image is compared with the stored image of the compliant advertisement, and if the sub-image is included in the image of the compliant advertisement, the content compliance of the advertisement is determined.
In addition, under the condition that the owner is allowed to freely post the advertisement, after the sub-image is obtained, if the characters and/or images which are forbidden to be used are determined not to be in the sub-image, the content of the corresponding advertisement can be judged to be in compliance; if the character or image which is forbidden to be used is determined in the sub-image, the content of the corresponding advertisement can be judged to be not in compliance.
In specific implementation, when the size or the position of an advertisement on a vehicle is determined to be not in compliance, warning information can be sent to remind a corresponding vehicle owner of posting the advertisement in compliance; when the content of a certain advertisement on the vehicle is determined to be not in compliance, the warning message is sent, and certain functions of the corresponding vehicle owner terminal can be limited, such as prohibiting the vehicle owner terminal from taking over the network appointment order. That is to say, different processing strategies can be adopted aiming at different types of non-compliance of the car main terminal, so that the network car booking operator is facilitated to better manage the network car booking when the advertisement is posted by the network car booking in a standard mode.
In the embodiment of the application, the advertisement posting condition of the vehicle is automatically detected by means of the HSV color space and the position conversion relationship between images. The detection is insensitive to the shape and color of the advertisement, so it is suitable for detecting various advertisements and can well promote compliant posting of advertisements on network-booked cars.
When the method provided in the embodiments of the present application is implemented in software or hardware or a combination of software and hardware, a plurality of functional modules may be included in the electronic device, and each functional module may include software, hardware or a combination of software and hardware.
Based on the same technical concept, embodiments of the present application further provide an additional compliance detection apparatus, and the principle of the additional compliance detection apparatus for solving the problem is similar to the foregoing additional compliance detection method, so that the implementation of the additional compliance detection apparatus for an additional can refer to the implementation of the additional compliance detection method, and repeated details are omitted. Fig. 8 is a schematic structural diagram of a compliance detection apparatus of an add-on according to an embodiment of the present application, and includes an obtaining module 801, a determining module 802, and an analyzing module 803.
An acquisition module 801 for acquiring at least one first image of a target object;
a determining module 802, configured to compare each first image with a second image in a corresponding image collecting orientation collected when the target object has no target attachment, so as to determine whether the target object has a target attachment on a corresponding area;
an analyzing module 803, configured to analyze whether the target object is in compliance with the target attachment on the corresponding area based on the area information of the target attachment if the target object has the target attachment on the corresponding area.
In some embodiments, the determining module 802 is specifically configured to:
performing color space conversion on the first image to obtain component values of the first image on each color component in an appointed color space;
performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component;
performing color difference analysis based on the component difference images corresponding to the color components to obtain color difference images;
and performing additive detection based on the color difference image to determine whether the target object has a target additive on the corresponding area.
In some embodiments, the determining module 802 is specifically configured to:
comparing a first component value of each pixel in the first image on each color component with a second component value of a corresponding pixel in the second image on the color component;
if the absolute value of the difference value between the first component value and the second component value is greater than the preset threshold value corresponding to the color component, marking the pixel value of the corresponding pixel in the component difference image corresponding to the color component as a first preset value; and if the absolute value of the difference value between the first component value and the second component value is not greater than the preset threshold value, marking the pixel value of the pixel as a second preset value.
In some embodiments, the determining module 802 is specifically configured to:
performing dot product operation on pixel values corresponding to the same pixel in each component difference image to obtain a dot product image, determining the sum of the pixel values in an area with a specified area size and taking the pixel at the position (i, j) as the center in the dot product image aiming at the pixel (i, j) in the first image, if the sum of the pixel values is larger than a first set value, marking the pixel value at the position (i, j) in the color difference image as a third preset value, and if the sum of the pixel values is not larger than the first set value, marking the pixel value at the position (i, j) in the color difference image as a fourth preset value, wherein i and j are positive integers; or
And for the pixel (i, j) in the first image, determining the number of pixels taking the value as a first preset value in an area with the pixel at the position (i, j) as the center in the component difference image corresponding to each color component, determining the sum of the number of the pixels, if the sum of the number of the pixels is greater than a second set value, marking the pixel value at the position (i, j) in the color difference image as a third preset value, and if the sum of the number of the pixels is not greater than the second set value, marking the pixel value at the position (i, j) in the color difference image as a fourth preset value.
In some embodiments, the analysis module 803 is specifically configured to:
based on the position conversion relation between the first image and the second image, converting the area information of the target attachment to obtain a target area;
if the target area is located in the designated area in the second image, determining an additional position compliance of the target additional object, and/or if the size of the target area meets a preset size requirement, determining a size compliance of the target additional object.
In some embodiments, the analysis module 803 is specifically configured to:
cutting the first image based on the area information of the target attachment to obtain a sub-image;
determining content compliance of the target add-on if the sub-image is included in the saved image of the compliance add-on; or; and if the sub-image does not contain characters and/or images prohibited to be used, determining the content compliance of the target additional object.
In some embodiments, a verification module 804 is further included for:
before comparing each first image with a second image in a corresponding image acquisition orientation acquired when the target object has no target attachment to determine whether the target object has the target attachment on a corresponding area, determining that the first image meets a preset detection condition, wherein the preset detection condition comprises any combination of the following conditions:
the time difference between the acquisition time of the first image and the time of indicating image acquisition on the target object is less than the preset time length;
the distance between the acquisition position of the first image and the position of the target object when the target object is indicated to be subjected to image acquisition is smaller than a preset distance;
the first image meets a preset integrity requirement.
In some embodiments, the analysis module 803 is further configured to:
and if the target object does not have the target additional object on the corresponding area, analyzing whether the target object lacks the target additional object on the corresponding area or not based on a preset additional requirement.
The division of the modules in the embodiments of the present application is schematic, and only one logic function division is provided, and in actual implementation, there may be another division manner, and in addition, each function module in each embodiment of the present application may be integrated in one processor, may also exist alone physically, or may also be integrated in one module by two or more modules. The coupling of the various modules to each other may be through interfaces that are typically electrical communication interfaces, but mechanical or other forms of interfaces are not excluded. Thus, modules described as separate components may or may not be physically separate, may be located in one place, or may be distributed in different locations on the same or different devices. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Having described the method and apparatus for compliance detection of an additional object according to exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application is described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device according to the present application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the methods according to the various exemplary embodiments of the present application described above in this specification. For example, the processor may perform the steps of the compliance detection method for an additional object described above.
The electronic device 130 according to this embodiment of the present application is described below with reference to fig. 9. The electronic device 130 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the internet) via the network adapter 136. As shown, the network adapter 136 communicates with the other modules of the electronic device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 132 comprising instructions, is also provided; the instructions are executable by the processor 131 to perform the compliance detection method for an additional object described above. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by the processor 131, implements the exemplary method as provided herein.
In an exemplary embodiment, various aspects of the compliance detection method for an additional object provided by the present application may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps of the methods according to the various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for compliance detection of an additional object in the embodiments of the present application may take the form of a CD-ROM, include program code, and run on a computing device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing device may be connected to the user's computing device over any kind of network, such as a local area network (LAN) or wide area network (WAN), or may be connected to an external computing device (for example, over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the detailed description above, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in a single unit. Conversely, the features and functions of one unit described above may be further divided into and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they are aware of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all alterations and modifications that fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (18)

1. A method of compliance testing of an add-on, comprising:
acquiring at least one first image of a target object;
comparing each first image with a second image in a corresponding image acquisition orientation acquired when the target object has no target attachment to determine whether the target object has the target attachment in a corresponding area;
and if the target object has the target additional object on the corresponding area, analyzing whether the target additional object on the corresponding area of the target object is in compliance or not based on the area information of the target additional object.
2. The method of claim 1, wherein comparing each first image to a second image at a corresponding image acquisition orientation acquired when the target object has no target appendage to determine whether the target object has a target appendage on a corresponding region comprises:
performing color space conversion on the first image to obtain component values of the first image on each color component in a designated color space;
performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component;
performing color difference analysis based on the component difference images corresponding to the color components to obtain color difference images;
and performing additive detection based on the color difference image to determine whether the target object has a target additive on the corresponding area.
3. The method of claim 2, wherein performing a color component difference analysis based on the component values of the first image on each color component and the component values of the second image on the color components to obtain component difference images corresponding to the color components comprises:
comparing a first component value of each pixel in the first image on each color component with a second component value of a corresponding pixel in the second image on the color component;
if the absolute value of the difference value between the first component value and the second component value is greater than the preset threshold value corresponding to the color component, marking the pixel value of the corresponding pixel in the component difference image corresponding to the color component as a first preset value; and if the absolute value of the difference value between the first component value and the second component value is not greater than the preset threshold value, marking the pixel value of the pixel as a second preset value.
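The per-component thresholding in claim 3 can be sketched with NumPy as follows. The three-component layout, the function name, and the threshold values are illustrative assumptions; the patent does not fix a particular color space or threshold.

```python
import numpy as np

def component_difference_images(first, second, thresholds):
    """For each color component, mark pixels whose absolute difference from
    the second image exceeds that component's preset threshold
    (first preset value = 1, second preset value = 0)."""
    diffs = []
    for c, thr in enumerate(thresholds):
        d = np.abs(first[..., c].astype(int) - second[..., c].astype(int))
        diffs.append((d > thr).astype(np.uint8))
    return diffs

# Tiny 2x2 example with three components (e.g. after conversion to HSV).
first = np.array([[[120, 40, 200], [10, 10, 10]],
                  [[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
second = np.array([[[100, 40, 90], [10, 12, 10]],
                   [[0, 0, 0], [250, 255, 255]]], dtype=np.uint8)
diffs = component_difference_images(first, second, thresholds=(15, 15, 15))
```

Only the top-left pixel differs by more than the threshold on components 0 and 2, so only it is marked with the first preset value in those component difference images.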
4. The method of claim 3, wherein performing a color difference analysis based on the component difference image corresponding to each color component to obtain a color difference image comprises:
performing dot product operation on pixel values corresponding to the same pixel in each component difference image to obtain a dot product image, determining the sum of the pixel values in an area with a specified area size and taking the pixel at the position (i, j) as the center in the dot product image aiming at the pixel (i, j) in the first image, if the sum of the pixel values is larger than a first set value, marking the pixel value at the position (i, j) in the color difference image as a third preset value, and if the sum of the pixel values is not larger than the first set value, marking the pixel value at the position (i, j) in the color difference image as a fourth preset value, wherein i and j are positive integers; or
for the pixel (i, j) in the first image, determining, in the component difference image corresponding to each color component, the number of pixels whose value is the first preset value in an area centered on the pixel at position (i, j), and determining the sum of these numbers; if the sum is greater than a second set value, marking the pixel value at position (i, j) in the color difference image as the third preset value, and if the sum is not greater than the second set value, marking the pixel value at position (i, j) in the color difference image as the fourth preset value.
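The first branch of claim 4 (elementwise "dot" product of the component difference images, followed by a windowed sum around each pixel) might look like the sketch below. The 3x3 window and the set/preset values are assumptions chosen for illustration.

```python
import numpy as np

def color_difference_image(diffs, window=3, first_set_value=1):
    """Elementwise product of the component difference images, then, for
    each pixel, the sum of values in a window centered on it; the pixel is
    marked (third preset value = 1) when that sum exceeds the first set
    value, else left at the fourth preset value = 0."""
    dot = diffs[0].copy()
    for d in diffs[1:]:
        dot = dot * d                      # pixelwise product across components
    pad = window // 2
    padded = np.pad(dot.astype(int), pad)  # zero-pad so the window fits at edges
    out = np.zeros_like(dot)
    h, w = dot.shape
    for i in range(h):
        for j in range(w):
            if padded[i:i + window, j:j + window].sum() > first_set_value:
                out[i, j] = 1
    return out

# A 2x2 patch that differs on every component survives the dot product
# and is confirmed by its neighborhood sum; isolated single pixels are not.
d = np.zeros((4, 4), dtype=np.uint8)
d[:2, :2] = 1
result = color_difference_image([d, d, d])
```

The windowed sum acts as a simple noise filter: a difference must be supported by its neighborhood before it is kept in the color difference image.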
5. The method of claim 1, wherein analyzing whether the target object is compliant with the target add-on the corresponding region based on the region information of the target add-on comprises:
based on the position conversion relation between the first image and the second image, converting the area information of the target attachment to obtain a target area;
if the target area is located in the designated area in the second image, determining an additional position compliance of the target additional object, and/or if the size of the target area meets a preset size requirement, determining a size compliance of the target additional object.
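A sketch of this position/size check, assuming the position conversion relation is a 3x3 homography and that both the designated region and the size requirement are axis-aligned bounds; all names and the area limits are illustrative, not from the patent.

```python
import numpy as np

def region_compliance(region_corners, homography, designated, area_range):
    """Map the attachment region from the first image into the second
    image's coordinates, then check position compliance (containment in
    the designated region) and size compliance (area within bounds)."""
    pts = np.hstack([region_corners, np.ones((len(region_corners), 1))])
    mapped = (homography @ pts.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]          # perspective divide
    x0, y0, x1, y1 = designated
    position_ok = bool(np.all((mapped[:, 0] >= x0) & (mapped[:, 0] <= x1) &
                              (mapped[:, 1] >= y0) & (mapped[:, 1] <= y1)))
    w = mapped[:, 0].max() - mapped[:, 0].min()
    h = mapped[:, 1].max() - mapped[:, 1].min()
    size_ok = bool(area_range[0] <= w * h <= area_range[1])
    return position_ok, size_ok

# A 40x20 region checked under the identity transform.
corners = np.array([[10., 10.], [50., 10.], [50., 30.], [10., 30.]])
pos_ok, size_ok = region_compliance(corners, np.eye(3),
                                    designated=(0, 0, 100, 100),
                                    area_range=(100, 2000))
```

Position and size are evaluated independently, matching the "and/or" structure of the claim.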
6. The method of claim 1 or 5, wherein analyzing whether the target object is compliant with the target add-on the corresponding region based on the region information of the target add-on comprises:
cutting the first image based on the area information of the target attachment to obtain a sub-image;
determining content compliance of the target add-on if the sub-image is included in the saved images of compliant add-ons; or, if the sub-image does not contain characters and/or images that are prohibited from use, determining content compliance of the target add-on.
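One reading of claim 6: crop the attachment region from the first image and match it against stored compliant attachments. The exact-match comparison below is an assumption made for brevity; a real system would more plausibly use a tolerant similarity measure or a recognition model.

```python
import numpy as np

def content_compliant(first_image, region, approved_crops):
    """Cut the sub-image given by the attachment's region information
    (x, y, width, height) and check it against saved compliant images."""
    x, y, w, h = region
    sub = first_image[y:y + h, x:x + w]
    return any(sub.shape == a.shape and np.array_equal(sub, a)
               for a in approved_crops)

img = np.arange(36, dtype=np.uint8).reshape(6, 6)
approved = [img[2:4, 1:4].copy()]          # a known-compliant crop
match = content_compliant(img, (1, 2, 3, 2), approved)
```

The prohibited-content branch of the claim would replace the membership test with a check against a blocklist of forbidden characters or images.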
7. The method of claim 1, wherein comparing each first image to a second image at a corresponding image acquisition orientation acquired when the target object has no target add-on to determine whether the target object has a target add-on a corresponding region further comprises:
determining that the first image meets a preset detection condition, wherein the preset detection condition comprises any combination of the following conditions:
the time difference between the acquisition time of the first image and the time of indicating image acquisition on the target object is less than the preset time length;
the distance between the acquisition position of the first image and the position of the target object when the target object is indicated to be subjected to image acquisition is smaller than a preset distance;
the first image meets a preset integrity requirement.
8. The method of claim 1, further comprising:
and if the target object does not have the target additional object on the corresponding area, analyzing whether the target object lacks the target additional object on the corresponding area or not based on a preset additional requirement.
9. An attachment compliance detection device, comprising:
an acquisition module for acquiring at least one first image of a target object;
the determining module is used for comparing each first image with a second image in a corresponding image acquisition direction acquired when the target object has no target additive to determine whether the target object has the target additive in a corresponding area;
and the analysis module is used for analyzing whether the target additional object on the corresponding area of the target object is in compliance or not based on the area information of the target additional object if the target object has the target additional object on the corresponding area.
10. The apparatus of claim 9, wherein the determination module is specifically configured to:
performing color space conversion on the first image to obtain component values of the first image on each color component in a designated color space;
performing color component difference analysis based on the component value of the first image on each color component and the component value of the second image on the color component to obtain a component difference image corresponding to the color component;
performing color difference analysis based on the component difference images corresponding to the color components to obtain color difference images;
and performing additive detection based on the color difference image to determine whether the target object has a target additive on the corresponding area.
11. The apparatus of claim 10, wherein the determination module is specifically configured to:
comparing a first component value of each pixel in the first image on each color component with a second component value of a corresponding pixel in the second image on the color component;
if the absolute value of the difference value between the first component value and the second component value is greater than the preset threshold value corresponding to the color component, marking the pixel value of the corresponding pixel in the component difference image corresponding to the color component as a first preset value; and if the absolute value of the difference value between the first component value and the second component value is not greater than the preset threshold value, marking the pixel value of the pixel as a second preset value.
12. The apparatus of claim 11, wherein the determination module is specifically configured to:
performing dot product operation on pixel values corresponding to the same pixel in each component difference image to obtain a dot product image, determining the sum of the pixel values in an area with a specified area size and taking the pixel at the position (i, j) as the center in the dot product image aiming at the pixel (i, j) in the first image, if the sum of the pixel values is larger than a first set value, marking the pixel value at the position (i, j) in the color difference image as a third preset value, and if the sum of the pixel values is not larger than the first set value, marking the pixel value at the position (i, j) in the color difference image as a fourth preset value, wherein i and j are positive integers; or
for the pixel (i, j) in the first image, determining, in the component difference image corresponding to each color component, the number of pixels whose value is the first preset value in an area centered on the pixel at position (i, j), and determining the sum of these numbers; if the sum is greater than a second set value, marking the pixel value at position (i, j) in the color difference image as the third preset value, and if the sum is not greater than the second set value, marking the pixel value at position (i, j) in the color difference image as the fourth preset value.
13. The apparatus of claim 9, wherein the analysis module is specifically configured to:
based on the position conversion relation between the first image and the second image, converting the area information of the target attachment to obtain a target area;
if the target area is located in the designated area in the second image, determining an additional position compliance of the target additional object, and/or if the size of the target area meets a preset size requirement, determining a size compliance of the target additional object.
14. The apparatus of claim 9 or 13, wherein the analysis module is specifically configured to:
cutting the first image based on the area information of the target attachment to obtain a sub-image;
determining content compliance of the target add-on if the sub-image is included in the saved images of compliant add-ons; or, if the sub-image does not contain characters and/or images that are prohibited from use, determining content compliance of the target add-on.
15. The apparatus of claim 9, further comprising a verification module to:
before comparing each first image with a second image in a corresponding image acquisition orientation acquired when the target object has no target attachment to determine whether the target object has the target attachment on a corresponding area, determining that the first image meets a preset detection condition, wherein the preset detection condition comprises any combination of the following conditions:
the time difference between the acquisition time of the first image and the time of indicating image acquisition on the target object is less than the preset time length;
the distance between the acquisition position of the first image and the position of the target object when the target object is indicated to be subjected to image acquisition is smaller than a preset distance;
the first image meets a preset integrity requirement.
16. The apparatus of claim 9, wherein the analysis module is further to:
and if the target object does not have the target additional object on the corresponding area, analyzing whether the target object lacks the target additional object on the corresponding area or not based on a preset additional requirement.
17. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-8.
CN202111151152.8A 2021-09-29 2021-09-29 Compliance detection method and device for additional object, electronic equipment and storage medium Active CN113870226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111151152.8A CN113870226B (en) 2021-09-29 2021-09-29 Compliance detection method and device for additional object, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113870226A true CN113870226A (en) 2021-12-31
CN113870226B CN113870226B (en) 2024-03-22

Family

ID=78992743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111151152.8A Active CN113870226B (en) 2021-09-29 2021-09-29 Compliance detection method and device for additional object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113870226B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109389126A (en) * 2017-08-10 2019-02-26 杭州海康威视数字技术股份有限公司 A kind of method for detecting area based on color, device and electronic equipment
US20200020128A1 (en) * 2018-07-10 2020-01-16 Fuji Xerox Co., Ltd. Article-for-posting management system and non-transitory computer readable medium
CN111507411A (en) * 2020-04-20 2020-08-07 北京英迈琪科技有限公司 Image comparison method and system
CN113191293A (en) * 2021-05-11 2021-07-30 创新奇智(重庆)科技有限公司 Advertisement detection method, device, electronic equipment, system and readable storage medium

Also Published As

Publication number Publication date
CN113870226B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
US11790772B2 (en) Traffic light image processing
CN112101305B (en) Multi-path image processing method and device and electronic equipment
CN111695609B (en) Target damage degree judging method and device, electronic equipment and storage medium
US20190197328A1 (en) Method and apparatus for outputting information
CN112270309A (en) Vehicle access point equipment snapshot quality evaluation method and device and readable medium
CN112288716B (en) Method, system, terminal and medium for detecting bundling state of steel coil
CN111178357B (en) License plate recognition method, system, device and storage medium
JP7429756B2 (en) Image processing method, device, electronic device, storage medium and computer program
EP4080479A2 (en) Method for identifying traffic light, device, cloud control platform and vehicle-road coordination system
CN110263301B (en) Method and device for determining color of text
CN111382695A (en) Method and apparatus for detecting boundary points of object
CN113902740A (en) Construction method of image blurring degree evaluation model
CN115134537A (en) Image processing method and device and vehicle
CN117455762A (en) Method and system for improving resolution of recorded picture based on panoramic automobile data recorder
CN113870226B (en) Compliance detection method and device for additional object, electronic equipment and storage medium
US20230048649A1 (en) Method of processing image, electronic device, and medium
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment
CN116468914A (en) Page comparison method and device, storage medium and electronic equipment
CN115712746A (en) Image sample labeling method and device, storage medium and electronic equipment
CN115019511A (en) Method and device for identifying illegal lane change of motor vehicle based on automatic driving vehicle
CN115273025A (en) Traffic asset checking method, device, medium and electronic equipment
CN114677649A (en) Image recognition method, apparatus, device and medium
JP2022006180A (en) Hand shaking correction method of image, device, electronic device, storage media, computer program product, roadside machine and cloud control platform
CN116013091B (en) Tunnel monitoring system and analysis method based on traffic flow big data
CN111553210B (en) Training method of lane line detection model, lane line detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant