CN112396649B - Image processing method, device, computer system and readable storage medium - Google Patents

Image processing method, device, computer system and readable storage medium

Info

Publication number
CN112396649B
CN112396649B CN201910767309.6A
Authority
CN
China
Prior art keywords
image
visible light
scanning
light image
visible
Prior art date
Legal status
Active
Application number
CN201910767309.6A
Other languages
Chinese (zh)
Other versions
CN112396649A (en)
Inventor
吴南南
吴凡
马艳芳
彭华
赵世锋
王涛
Current Assignee
Nuctech Co Ltd
Original Assignee
Nuctech Co Ltd
Priority date
Filing date
Publication date
Application filed by Nuctech Co Ltd filed Critical Nuctech Co Ltd
Priority to CN201910767309.6A priority Critical patent/CN112396649B/en
Priority to PCT/CN2020/089633 priority patent/WO2021031626A1/en
Publication of CN112396649A publication Critical patent/CN112396649A/en
Application granted granted Critical
Publication of CN112396649B publication Critical patent/CN112396649B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10116 - X-ray image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The present disclosure provides an image processing method, including: acquiring a scanned image marked with a suspected object, the scanned image being obtained by scanning an inspected object with security inspection equipment; determining the visible light image corresponding to the scanned image according to the starting scanning time of the scanned image, the visible light image being acquired of the inspected object by a visible light image acquisition device; and marking the suspected object in the visible light image according to the marked position of the suspected object in the scanned image. The present disclosure also provides an image processing apparatus, a computer system, and a computer-readable storage medium.

Description

Image processing method, device, computer system and readable storage medium
Technical Field
The present disclosure relates to an image processing method, an image processing apparatus, a computer system, and a computer-readable storage medium.
Background
In crowded places such as subway stations or railway stations, security inspection equipment is usually deployed to ensure the safety of personnel and the normal operation of vehicles; the equipment inspects the packages carried by passengers. During security inspection, a passenger places a package at one end of the equipment, the equipment conveys the package into the inspection tunnel for X-ray scanning and then conveys it back out, and a staff member judges whether suspicious articles are present in the package by examining the X-ray scan image.
When a staff member judges that a suspicious article is present in a package, the package must be opened and inspected locally at the security inspection point. However, because the local security staff receive very little package information, they cannot accurately locate the suspicious article during the unpacking inspection. This is especially true where centralized image-judging systems are used at scale: the unpacking step relies on the premise that baggage flagged for inspection by the remote image-judging system is not taken away by the passenger, and the scant package information available to the local security staff makes accurate localization of the suspicious article difficult, resulting in low unpacking efficiency and related problems.
Disclosure of Invention
One aspect of the present disclosure provides an image processing method including: acquiring a scanning image marked with a suspected object, wherein the scanning image is obtained by scanning an inspected object through security inspection equipment; determining a visible light image corresponding to the scanning image according to the starting scanning time of the scanning image, wherein the visible light image is acquired by a visible light image acquisition device; and marking the suspected object in the visible light image according to the marking position of the suspected object in the scanning image.
Optionally, the image processing method further includes: and displaying the visible light image marked with the suspicious object on the electronic equipment of the inspection station to indicate the position of the suspicious object.
Optionally, the determining the visible light image corresponding to the scanned image according to the starting scanning time of the scanned image includes: acquiring the transmission time length of the checked object in the security inspection equipment; determining the acquisition time of the visible light image according to the transmission time of the checked object in the security inspection equipment and the starting scanning time of the scanning image; and determining the visible light image corresponding to the scanning image from the images acquired by the visible light image acquisition equipment according to the acquisition time of the visible light image.
Optionally, marking the suspected object in the visible light image according to the marked position of the suspected object in the scanned image includes: acquiring a pixel mapping relation between the scanning image and the visible light image; determining the marking position of the suspected object in the visible light image according to the marking position of the suspected object in the scanning image, the size information of the scanning image and the pixel mapping relation; and marking the suspected object in the visible light image according to the marking position of the suspected object in the visible light image.
Optionally, the marked position of the suspected object in the visible light image is determined from the marked position of the suspected object in the scanned image, the size information of the scanned image, and the pixel mapping relation according to the following formulas:
x2 = (x1/L1) * L3 * K,  y2 = (y1/W1) * (W3-c-d) + d,
L4 = (L2/L1) * L3 * K,  W4 = (W2/W1) * (W3-c-d);
where (x1, y1) are the coordinates of the suspect frame in the scanned image, L1 and W1 are the length and width of the scanned image, and L2 and W2 are the length and width of the suspect frame in the scanned image; (x2, y2) are the coordinates of the suspect frame in the visible light image, L3 and W3 are the length and width of the visible light image, and L4 and W4 are the length and width of the suspect frame in the visible light image; K is the pixel mapping relation; c is the width of the gap between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d is the width of the gap between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
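As a minimal sketch of the mapping above (the function name and the sample dimensions are assumptions for illustration, not part of the patent), the formulas can be written as:

```python
def map_suspect_box(x1, y1, L2, W2, L1, W1, L3, W3, K, c, d):
    """Map a suspect frame from the scanned (X-ray) image into the visible
    light image. (x1, y1) and L2 x W2 describe the frame in the scanned
    image of size L1 x W1; the visible light image has size L3 x W3; K is
    the pixel mapping relation; c and d are the gaps between the belt edges
    and the upper/lower edges of the visible light image."""
    x2 = (x1 / L1) * L3 * K               # horizontal: scale by length ratio and K
    y2 = (y1 / W1) * (W3 - c - d) + d     # vertical: map onto the belt region only
    L4 = (L2 / L1) * L3 * K               # mapped frame length
    W4 = (W2 / W1) * (W3 - c - d)         # mapped frame width
    return x2, y2, L4, W4

# Example with assumed dimensions: a 1000x500 X-ray image, a 1280x720
# visible light frame, K = 1.0, and 60-pixel gaps above and below the belt.
print(map_suspect_box(100, 50, 200, 100, 1000, 500, 1280, 720, 1.0, 60, 60))
# → (128.0, 120.0, 256.0, 120.0)
```

Note how the vertical mapping only spans the belt region (W3 - c - d) and is offset by d, so a frame at the top edge of the scanned image lands on the upper edge of the belt rather than the edge of the photo.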
Optionally, acquiring the scanned image marked with a suspected object includes: acquiring the scanned image marked with the suspected object from an image-judging station; and/or acquiring the scanned image marked with the suspected object from an automatic image-judging server.
Another aspect of the present disclosure provides an image processing apparatus, including: an acquisition module configured to acquire a scanned image marked with a suspected object, the scanned image being obtained by scanning an inspected object with security inspection equipment; a determining module configured to determine the visible light image corresponding to the scanned image according to the starting scanning time of the scanned image, the visible light image being acquired of the inspected object by a visible light image acquisition device; and a marking module configured to mark the suspected object in the visible light image according to the marked position of the suspected object in the scanned image.
Optionally, the image processing apparatus further includes a display module, configured to display a visible light image marked with a suspected object on an electronic device of the inspection station, so as to indicate a position of the suspected object.
Optionally, the determining module includes: a first acquiring unit configured to acquire a transmission time length of the inspected object in the security inspection device; a first determining unit configured to determine an acquisition time of the visible light image according to a transmission time period of the object to be inspected in the security inspection apparatus and a start scanning time of the scanning image; and a second determining unit, configured to determine, from the images acquired by the visible light image acquisition device, a visible light image corresponding to the scanned image according to the acquisition time of the visible light image.
Optionally, the marking module includes: a second acquisition unit configured to acquire a pixel mapping relationship between the scanned image and the visible light image; a third determining unit configured to determine a marking position of the suspected substance in the visible light image according to the marking position of the suspected substance in the scanned image, the size information of the scanned image, and the pixel mapping relationship; and a marking unit configured to mark a suspected object in the visible light image according to a marking position of the suspected object in the visible light image.
Optionally, the third determining unit determines the marked position of the suspected object in the visible light image from the marked position of the suspected object in the scanned image, the size information of the scanned image, and the pixel mapping relation according to the following formulas:
x2 = (x1/L1) * L3 * K,  y2 = (y1/W1) * (W3-c-d) + d,
L4 = (L2/L1) * L3 * K,  W4 = (W2/W1) * (W3-c-d);
where (x1, y1) are the coordinates of the suspect frame in the scanned image, L1 and W1 are the length and width of the scanned image, and L2 and W2 are the length and width of the suspect frame in the scanned image; (x2, y2) are the coordinates of the suspect frame in the visible light image, L3 and W3 are the length and width of the visible light image, and L4 and W4 are the length and width of the suspect frame in the visible light image; K is the pixel mapping relation; c is the width of the gap between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d is the width of the gap between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
Optionally, the acquiring module is configured to acquire a scanned image of the suspected object marked from the image judging station; and/or acquiring a scanned image of the marked suspected object from the automatic judgment server.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a method as described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which, when executed, are adapted to carry out the method as described above.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of an image processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a schematic view of a security inspection device according to another embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of an image processing method according to an embodiment of the disclosure;
Fig. 4 schematically illustrates a schematic diagram of determining a visible light image corresponding to a scanned image according to a start scanning time of the scanned image according to an embodiment of the present disclosure;
fig. 5 schematically shows a schematic view of an X-ray image scanned at a main view angle;
Fig. 6 schematically shows a schematic view of one of the frames of visible light images taken by the camera;
fig. 7 schematically shows a schematic view of an X-ray image scanned at a sub-view angle;
fig. 8 schematically shows a schematic view of another frame of a visible light image taken by a camera;
FIG. 9 schematically illustrates a schematic diagram for characterizing the size of an X-ray image;
FIG. 10 schematically illustrates a schematic diagram for characterizing visible light image size;
FIG. 11 schematically illustrates another diagram for characterizing visible light image size;
fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
Fig. 13 schematically illustrates a block diagram of a computer system suitable for implementing the image processing methods and apparatus according to embodiments of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B, and C, etc." is used, it should generally be interpreted in the sense commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). Where a formulation analogous to "at least one of A, B, or C, etc." is used, it should likewise be interpreted in that ordinary sense (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system.
Embodiments of the present disclosure provide an image processing method, an image processing apparatus, a computer system, and a computer-readable storage medium. The method includes: acquiring a scanned image marked with a suspected object, the scanned image being obtained by scanning an inspected object with security inspection equipment; determining the visible light image corresponding to the scanned image according to the starting scanning time of the scanned image, the visible light image being acquired of the inspected object by a visible light image acquisition device; and marking the suspected object in the visible light image according to the marked position of the suspected object in the scanned image.
Fig. 1 schematically illustrates an application scenario of an image processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is merely an example of a scenario in which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, in the application scenario 100, an object of a passenger needs to be detected by the security inspection device 110, the security inspection device 110 may perform X-ray scanning detection on the object, and may send a scanned image (for example, an X-ray image) of the detected object to the image determining station 120 through the network 130 in real time. The image determination station 120 may include, for example, a display that may display, in real-time, an X-ray image of the item sent by the security device 110. According to embodiments of the present disclosure, the graph judging station 120 may be a remote graph judging station or a local graph judging station.
According to an embodiment of the disclosure, the security inspection device 110 may generate an image-judging task after performing X-ray scanning detection on an article, send the task to a task scheduling center, and let the task scheduling center allocate it to an image-judging station. There may be multiple task scheduling centers, each communicatively connected with image-judging stations and security inspection equipment, and there may likewise be multiple image-judging stations. According to an embodiment of the disclosure, the communication framework of security inspection equipment, task scheduling centers, and image-judging stations can be designed as a decentralized, intelligent, distributed architecture. Based on this framework, marking suspected objects in visible light images with the image processing method provided by the present disclosure enables unpacking inspection to cooperate with remote image judgment and helps local security staff quickly and accurately find the package to be opened and the suspected objects inside it.
According to embodiments of the present disclosure, an image-judging operator may view the X-ray image of an item on a display and, upon finding a suspected object, send an opening-inspection instruction to the security inspection device 110 and/or the inspection station 140. After receiving the instruction, the security inspection device 110 and/or the inspection station 140 notifies the local inspector to take the corresponding article out of the security inspection device 110 for unpacking inspection. According to embodiments of the present disclosure, the suspected object in the X-ray image may be marked, and the marked image may be sent to the inspection station 140.
According to embodiments of the present disclosure, a visible light image acquisition device may be disposed on the security inspection device 110. For example, the visible light image capturing apparatus includes an image pickup device 111 and/or an image pickup device 112, and the image pickup device 111 and/or the image pickup device 112 may be disposed above the security inspection box 113 of the security inspection apparatus 110. The imaging device 111 and/or the imaging device 112 may be used to acquire a visible light image of the detected object.
According to embodiments of the present disclosure, the security inspection apparatus 110 may transmit the visible light image acquired by the image capturing device 111 and/or the image capturing device 112 to the inspection station 140.
According to an embodiment of the present disclosure, the security inspection box 113 is provided with an article inlet and an article outlet, through which the transfer device 114 may pass, and both ends of the transfer device 114 are exposed to the outside of the security inspection box 113. The conveyor 114 may be, for example, a conveyor belt.
The inner side of the top of the security inspection box 113 may be provided with an X-ray scanning device, for example, which may perform X-ray scanning on the articles passing through the security inspection box.
According to an embodiment of the disclosure, the inspection station 140 may match and bind the visible light image of a package to its X-ray image, so as to help a local security inspector quickly and accurately find the package to be subjected to the unpacking inspection.
According to embodiments of the present disclosure, the inspection station 140 may mark the suspected object in the visible light image according to the marking position of the suspected object in the X-ray image.
According to an embodiment of the disclosure, when the local inspector performs the unpacking inspection, the position of the suspicious object can be judged preliminarily from the suspect mark on the visible light image of the package, and its position inside the package can be further confirmed from the X-ray image via the suspect frame, the artificial intelligence automatic recognition result, the voice prompt of the image-judging operator, and so on, so as to find the suspicious object in the package.
According to an embodiment of the present disclosure, after the local inspector unpacks and inspects the package, the handling can be recorded at the inspection station 140. Handling results include release, confiscation, and handing over to the police, and the types of handling conclusions can be customized according to the specific business requirements of the customer.
Fig. 2 schematically illustrates a schematic view of a security inspection device according to another embodiment of the present disclosure.
As shown in fig. 2, the security inspection device 200 may include a baffle 210 in addition to the security inspection box.
A mounting groove may be provided on the inner surface of the bent portion at the top of the baffle 210, where the bent portion refers to the horizontal part of the top. The mounting groove may be used to mount the image pickup device 220 and the light supplementing device 230.
The light supplementing device 230 may provide light when the image capturing device 220 captures a visible light image.
According to an embodiment of the disclosure, a baffle is installed, for example, at the exit of the security inspection device 200 on the side where passengers walk, to ensure that the image-judging staff have enough time to perform the image judgment and to prevent a passenger from taking a parcel before an image-judging conclusion has been reached. A slot designed on the inner side of the top of the baffle holds an LED supplementary light and a baggage snapshot camera; because both are mounted inside, they are not disturbed by passengers or staff, which fully guarantees the fill-light and capture effect. The baggage snapshot camera photographs the appearance of the package.
According to an embodiment of the disclosure, after an opening-inspection instruction is issued for an image-judging task containing contraband, the image-judging conclusion can be transmitted back to the security inspection equipment, and the system automatically triggers the audible and visual alarm at the security inspection point from which the task originated, prompting the local security staff that a suspicious package needs to be intercepted and unpacked.
According to an embodiment of the disclosure, a portal frame can be designed on one side of the baffle, on which an emergency stop button, a reset button, a belt start-stop button, and an indicator lamp (with an integrated buzzer) are respectively arranged. The reset button is used when the indicator lamp's buzzer alarms; pressing it stops the alarm. The indicator lamp and the buzzer can be integrated, and the lamp has three states: green, red, and yellow. The lamp shows green when the security inspection device is in a normal working state, yellow when the security inspection point is offline, and red when the image-judging conclusion for an X-ray image is that an opening inspection is required. The buzzer may alarm in two cases: the security inspection point goes offline, or the remote image judgment concludes that an X-ray image requires an opening inspection. When the buzzer alarms, the local inspector can press the reset button to stop the alarm.
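The lamp and buzzer behavior described above can be sketched as a small decision function (a hypothetical illustration; the patent does not specify an implementation, and the function name is an assumption):

```python
def indicator_state(point_online, open_inspection_required):
    """Return (lamp color, buzzer sounding) following the rules above:
    yellow + buzzer when the security inspection point is offline,
    red + buzzer when an opening-inspection conclusion is made,
    green + silent in the normal working state."""
    if not point_online:
        return ("yellow", True)
    if open_inspection_required:
        return ("red", True)
    return ("green", False)

print(indicator_state(True, False))   # → ('green', False)
print(indicator_state(False, False))  # → ('yellow', True)
print(indicator_state(True, True))    # → ('red', True)
```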
According to an embodiment of the disclosure, in terms of usability, the reset button and the indicator lamp are integrated on the portal frame, making operation easier for on-site security staff. After a package is flagged for opening inspection and the audible and visual alarm is triggered, the on-site security staff receive the alarm notification immediately and can conveniently press the reset button to stop the alarm.
Fig. 3 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure.
It should be noted that the method shown in fig. 3 may be performed by the electronic device at the inspection station 140 shown in fig. 1. Of course, the present disclosure is not limited thereto. For example, the method shown in fig. 3 may be executed directly by the security inspection device 110; when the security inspection device 110 is provided with a display screen, the visible light image marked with the suspicious object may also be displayed directly on the security inspection device 110 to indicate the position of the suspicious object.
As shown in fig. 3, the method includes operations S310 to S330.
In operation S310, a scanned image marked with a suspected object is acquired, wherein the scanned image is obtained by scanning the inspected object with the security inspection equipment.
According to embodiments of the present disclosure, the scanned image marked with the suspected object may be acquired from an image-judging station. A staff member at the image-judging station can manually mark the position of the suspected object, framing the area where it is located, and then send the scanned image marked with the suspected object to the inspection station, or to any other party that needs it; for example, it can be sent directly to an electronic device held by the unpacking inspector.
According to embodiments of the present disclosure, the scanned image marked with the suspected object may also be acquired from an automatic image-judging server, which can automatically mark suspected objects using an artificial intelligence algorithm.
According to an embodiment of the present disclosure, the scanned image may be, for example, an X-ray image.
In operation S320, a visible light image corresponding to the scanned image is determined according to a start scanning time of the scanned image, wherein the visible light image is acquired by the visible light image acquisition device for the inspected object.
According to embodiments of the present disclosure, each scanned image may correspond to one or more frames of visible light images. The acquisition time of the first visible light frame of an article can be obtained by adding the transfer time of the article inside the security inspection equipment to the starting scanning time of the scanned image; the visible light image corresponding to the scanned image can then be determined from the many captured frames according to that acquisition time. According to embodiments of the present disclosure, the transfer time of the article inside the security inspection equipment may be fixed.
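A hedged sketch of this matching step (the function name and the sample timestamps are illustrative assumptions): the target acquisition time is the starting scanning time plus the fixed transfer time, and the captured frame closest to that time is selected.

```python
def match_visible_frame(scan_start, transfer_time, frame_times):
    """Return the index of the visible light frame whose capture timestamp
    is closest to scan_start + transfer_time."""
    target = scan_start + transfer_time
    return min(range(len(frame_times)),
               key=lambda i: abs(frame_times[i] - target))

# Assumed timestamps in seconds: the scan starts at 10.0, the transfer
# takes 2.5 s, and frames were captured at 11.0, 12.4, and 13.0.
print(match_visible_frame(10.0, 2.5, [11.0, 12.4, 13.0]))  # → 1
```

Since the transfer time is fixed, the same offset applies to every scanned image, so this lookup needs only the per-image starting scanning time.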
According to embodiments of the present disclosure, the visible light image capturing device may be, for example, a visible light camera.
In operation S330, the suspected object in the visible light image is marked according to the marking position of the suspected object in the scanned image.
According to embodiments of the present disclosure, a visible light image marked with a suspected object may be displayed on an electronic device of the unpacking inspection station to indicate the location of the suspected object.
By marking suspected objects in the visible light image, unpacking inspection and remote image judgment can cooperate, helping local staff quickly and accurately find the package to be opened and the suspected objects inside it.
The method shown in fig. 3 is further described with reference to fig. 4-11 in conjunction with the exemplary embodiment.
According to an embodiment of the present disclosure, determining a visible light image corresponding to a scanned image according to a start scanning time of the scanned image includes: acquiring the transmission time length of an inspected object in security inspection equipment; determining the acquisition time of the visible light image according to the transmission time of the checked object in the security inspection equipment and the starting scanning time of the scanned image; and determining the visible light image corresponding to the scanning image from the images acquired by the visible light image acquisition equipment according to the acquisition time of the visible light image.
Fig. 4 schematically illustrates a schematic diagram of determining a visible light image corresponding to a scanned image according to a start scanning time of the scanned image according to an embodiment of the present disclosure.
As shown in fig. 4, the X-ray machine in the security inspection device starts scanning at time t0. When the security inspection device uploads an X-ray scan image (hereinafter referred to as an X-ray image), the image information may include the start scanning time, through which the corresponding package photo is found.
According to embodiments of the present disclosure, the baggage inspection camera may be located at the exit of the security inspection machine, such as camera 112 shown in fig. 1 or camera 220 shown in fig. 2. Assume the front end position a of the camera's shooting range, the position b of the X-ray machine's beam exit surface, and the belt advancing speed v are known. Let the distance between position a and position b be Δx, the start scanning time be t0, and the time when the package arrives at position a be t1, which is the acquisition time of the visible light image. Δx/v is the transfer time of the baggage in the security inspection device, so t1 = t0 + Δx/v.
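The lookup described above can be sketched as follows. This is a minimal illustration, not part of the disclosure; the frame storage format and all function names are assumptions.

```python
# Sketch of finding the visible light frame for a scan. Assumes the
# camera stores frames as (timestamp, frame_id) pairs; all names here
# are illustrative and do not appear in the disclosure.

def acquisition_time(t0, delta_x, belt_speed):
    """t1 = t0 + Δx / v: start-scan time plus the transfer time."""
    return t0 + delta_x / belt_speed

def find_frame(frames, t1):
    """Return the stored frame whose capture time is closest to t1."""
    return min(frames, key=lambda f: abs(f[0] - t1))

# Example: scan starts at t0 = 100.0 s, position a is 1.5 m past the
# beam exit surface b, belt speed is 0.5 m/s, so t1 = 103.0 s.
t1 = acquisition_time(100.0, 1.5, 0.5)
frames = [(102.0, "frame_a"), (103.1, "frame_b"), (104.5, "frame_c")]
print(t1)                      # 103.0
print(find_frame(frames, t1))  # (103.1, 'frame_b')
```

In practice the camera's clock and the X-ray machine's clock would need to be synchronized for this nearest-timestamp lookup to be meaningful.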
According to embodiments of the present disclosure, security inspection devices may be classified into two types: single-view and dual-view. A dual-view device has a main view and a secondary view, and the baggage is scanned by the X-ray machine to obtain two X-ray images.
According to embodiments of the present disclosure, the X-ray image obtained by the main-view scan is obtained by scanning perpendicular to the package, similar to viewing the package from directly above or below.
Fig. 5 schematically shows a schematic view of an X-ray image scanned at a main view angle.
As shown in fig. 5, the position of the suspected object included in the X-ray image may be marked, for example, in a manner of a suspected frame, as shown by a dotted frame in fig. 5.
Fig. 6 schematically shows a schematic view of one of the frames of visible light images taken by the camera.
The suspected object in the visible light image is marked according to the suspect frame in the X-ray image obtained by the main-view scan. As shown in fig. 6, the position of the suspected object included in the visible light image may be marked, for example, with a suspect frame, as shown by the dashed frame in fig. 6.
According to embodiments of the present disclosure, the X-ray image obtained by the secondary-view scan is obtained by scanning parallel to the package, similar to viewing the package from its side.
Fig. 7 schematically shows a schematic view of an X-ray image scanned at a sub-view angle.
As shown in fig. 7, the X-ray image may include the position of a suspected object, which may be marked, for example, with a suspect frame.
Fig. 8 schematically shows a schematic view of another frame of a visible light image taken by a camera.
The suspected object in the visible light image is marked according to the suspect frame in the X-ray image obtained by the secondary-view scan. As shown in fig. 8, the visible light image may include the position of the suspected object, which may be marked, for example, with a suspect frame.
According to the embodiment of the disclosure, when the security inspection device is of the single-view type, the baggage is scanned by the X-ray machine to obtain one X-ray image, which may be scanned at either the main view or the secondary view.
According to an embodiment of the present disclosure, marking a suspected object in a visible light image according to a marking position of the suspected object in a scanned image includes: acquiring a pixel mapping relation between a scanning image and a visible light image; determining the mark position of the suspected object in the visible light image according to the mark position of the suspected object in the scanned image, the size information of the scanned image and the pixel mapping relation; and marking the suspected object in the visible light image according to the marking position of the suspected object in the visible light image.
According to the embodiment of the disclosure, an image judge can draw a suspect frame around a suspected object in a scanned image at either view, and the method of marking the package photo differs between the single-view and dual-view cases.
According to an embodiment of the present disclosure, in the single-view case, a single-view scan image with a drawn suspect frame may be shown with reference to fig. 5, a package photo marked with the suspect frame is shown in fig. 6, and a specific marking method is as follows.
According to embodiments of the present disclosure, a remote image judge may draw a suspect frame on the X-ray image, and the system obtains the length and width data of the X-ray image. Fig. 9 schematically shows a schematic diagram characterizing the size of an X-ray image. As shown in fig. 9, the lower-left corner of the X-ray image may be set as the coordinate origin O, the length of the X-ray image as L1, and the width as W1. The position information of the suspect frame is (x1, y1), L2, W2, where L2 is the length of the suspect frame and W2 is its width.
According to an embodiment of the present disclosure, once the mounting position of the camera is determined, the length and width of the visible light image are fixed. Fig. 10 schematically illustrates a schematic diagram characterizing the size of the visible light image. As shown in fig. 10, the lower-left corner of the visible light image may be set as the coordinate origin O, the length of the visible light image as L3, and the width as W3. The calculated position information of the suspect frame in the package's visible light image is (x2, y2), L4, W4, where L4 is the length of the suspect frame and W4 is its width.
Because one visible light image shot by the camera may include several small packages, the photo containing the suspected object is found by shooting time, and the suspect frame must be marked on the correct package so that it does not land between packages; an equal-proportion calculation based only on the lengths of the X-ray image and the visible light image therefore cannot be used. While the length of the visible light image is fixed, the length of an object in the scanned image is proportional to the actual package being scanned: the larger the package, the longer the object in the scanned image. The length of the scanned image thus changes dynamically, and to ensure the accuracy of the suspect frame, the pixel mapping relation between the scanned image and the package's visible light image needs to be determined.
According to an embodiment of the present disclosure, marking a suspected object in a visible light image according to a marking position of the suspected object in a scanned image may include the following steps.
First, the pixel mapping relation K between the scanned image and the package's visible light image is predetermined. The length of the X-ray image corresponding to the entire visible light image can be obtained by calibration, measured in pixels. After the camera's position is fixed, its shooting range is fixed. Assuming the length of the package's visible light image is L3, a marker can be selected that exactly fills the length of the entire visible light image. The marker is scanned and imaged by the X-ray machine to obtain an X-ray image (scanned image) of the marker, from which the pixel length L0 occupied by the marker on the scanned image is known. The pixel mapping relation K between the package's visible light image and the X-ray scan image can thus be determined as K = L3/L0.
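This one-time calibration can be sketched as below. The numeric values are made up for illustration; only the ratio K = L3/L0 comes from the text.

```python
# Sketch of the calibration step: a marker exactly filling the camera's
# field of view spans L3 pixels in the visible light image and occupies
# L0 pixels in the X-ray scan image of the same marker.

def pixel_mapping(l3_visible_px, l0_scan_px):
    """K = L3 / L0: converts scan-image pixel lengths to visible-image pixels."""
    return l3_visible_px / l0_scan_px

# Hypothetical example: the visible image is 1920 px long and the marker
# occupies 640 px in the X-ray image, giving K = 3.0.
K = pixel_mapping(1920, 640)
print(K)  # 3.0
```

Since the camera mount and the X-ray geometry are both fixed, K only needs to be measured once per installation.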
Second, since the width of objects scanned by the X-ray machine generally does not exceed the width of the belt, and the width of the package's visible light image is generally larger than the width of the belt conveyor, after the camera's mounting position is determined, the distances between the upper and lower edges of the package's visible light image and the upper and lower edges of the belt conveyor can be calculated; these distances are c and d, respectively.
Fig. 11 schematically shows another schematic diagram characterizing the visible light image size. As shown in fig. 11, c represents the width of the gap between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d represents the width of the gap between the lower edge of the belt in the visible light image and the lower edge of the visible light image. Specifically, c and d can be further illustrated with reference to the visible light image shown in fig. 8. As shown in fig. 8, the package travels on the belt; the upper edge of the visible light image is spaced from the upper edge of the belt by width c, and the lower edge of the visible light image is spaced from the lower edge of the belt by width d.
Third, when L3 > L1, the package's visible light image corresponding to the X-ray image is displayed in a single visible light image. The values of (x2, y2), L4, W4 are calculated as x2 = (x1/L1)*L3*(L3/L0), y2 = (y1/W1)*(W3-c-d)+d, L4 = (L2/L1)*L3*(L3/L0), W4 = (W2/W1)*(W3-c-d), and the suspect frame can then be marked on the package's visible light image.
When L3 < L1, the package's visible light image corresponding to the X-ray image is split across several visible light images. The value L0 mapped in the first step can be used to calculate the number of visible light images: L1/L0 is rounded down and the result incremented by 1. The visible light images are then stitched together, and the suspect frame is marked on the stitched package visible light image in the same way. Taking a package split across two visible light images as an example, as shown in fig. 4, the time when the package reaches position a is t1; letting the time when the package has moved forward on the belt by a distance corresponding to L0 be t2, t2 = t1 + L0/v, and the second visible light image can then be found according to the time t2.
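A sketch of this multi-image case follows. It assumes the reading that the image count is floor(L1/L0) + 1, and that the distance argument to the timing formula is expressed in the belt's units; both are interpretations of the text, and the numbers are invented.

```python
import math

# Sketch of the L3 < L1 case: the scan image (L1 px) is longer than the
# span covered by one visible image (L0 px), so the package appears
# across several camera frames that must be stitched.

def num_visible_images(l1_scan_px, l0_scan_px):
    """Number of visible images to stitch: floor(L1 / L0) + 1."""
    return math.floor(l1_scan_px / l0_scan_px) + 1

def next_frame_time(t_prev, l0_distance, belt_speed):
    """t2 = t1 + L0 / v, per the two-image example in the text."""
    return t_prev + l0_distance / belt_speed

# Hypothetical example: a 1500 px scan with L0 = 640 px per frame
# needs floor(1500/640) + 1 = 3 frames.
print(num_visible_images(1500, 640))       # 3
# If frame 1 was captured at t1 = 103.0 s and one field of view
# corresponds to 1.0 m at 0.5 m/s, frame 2 is at t2 = 105.0 s.
print(next_frame_time(103.0, 1.0, 0.5))    # 105.0
```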
According to the embodiment of the disclosure, the marking position of the suspected object in the visible light image is determined from the marking position of the suspected object in the scanned image, the size information of the scanned image, and the pixel mapping relation according to the following formulas:
x2 = (x1/L1)*L3*K, y2 = (y1/W1)*(W3-c-d)+d,
L4 = (L2/L1)*L3*K, W4 = (W2/W1)*(W3-c-d);
The coordinates of the suspect frame in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspect frame in the scanned image, and W2 is the width of the suspect frame in the scanned image; the coordinates of the suspect frame in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspect frame in the visible light image, and W4 is the width of the suspect frame in the visible light image; K is the pixel mapping relation, c represents the width of the gap between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d represents the width of the gap between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
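The formulas above translate directly into a small mapping routine. This is only a sketch with invented example values; the function name and argument order are not part of the disclosure.

```python
# Sketch implementing the mapping formulas: a suspect frame
# (x1, y1, L2, W2) in a scan image of size (L1, W1) is mapped into a
# visible light image of size (L3, W3), with pixel mapping K and belt
# margins c (top) and d (bottom).

def map_suspect_frame(x1, y1, L2, W2, L1, W1, L3, W3, K, c, d):
    x2 = (x1 / L1) * L3 * K
    y2 = (y1 / W1) * (W3 - c - d) + d
    L4 = (L2 / L1) * L3 * K
    W4 = (W2 / W1) * (W3 - c - d)
    return x2, y2, L4, W4

# Hypothetical example with round numbers:
print(map_suspect_frame(x1=250, y1=125, L2=500, W2=250,
                        L1=1000, W1=500, L3=1920, W3=1080,
                        K=2.0, c=40, d=40))
# (960.0, 290.0, 1920.0, 500.0)
```

Note that y is scaled into the belt region (W3 - c - d) and offset by d, so the frame stays within the belt area of the photo.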
According to embodiments of the present disclosure, two scan images as shown in fig. 5 and fig. 7 may be obtained in the dual-view case. If the remote image judge draws a suspect frame at the main view, the method of marking the package's visible light image is the same as in the single-view case. If the remote image judge draws the suspect frame at the secondary view, the secondary-view X-ray image with the suspect frame is shown in fig. 7, the visible light image marked with the suspect frame is shown in fig. 8, and a specific example marking method is as follows.
According to the embodiment of the disclosure, since the camera can only capture a visible light image from one view, equivalent to the main view of the security inspection device, and the visible light image is shot in the vertical direction, the height of the baggage is not visible in it. The marking method therefore calculates only the x-coordinate data, using the same calculation as above, while the y extent spans the full width W3 of the visible light image. The position of a suspect frame on the secondary-view X-ray image (corresponding to the side of the package) is thus mapped to a region of the visible light image.
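The secondary-view variant can be sketched as a reduced form of the main-view mapping: only x and length are computed, and the marked region spans the full image width. The function name and example values are assumptions for illustration.

```python
# Sketch of the secondary-view case: the x-coordinate and length use the
# same formulas as the main view, while the y extent covers the whole
# visible image width W3 (package height is not visible from above).

def map_secondary_view(x1, L2, L1, L3, K, W3):
    x2 = (x1 / L1) * L3 * K
    L4 = (L2 / L1) * L3 * K
    # The region spans the full image height: y from 0 up to W3.
    return x2, 0, L4, W3

# Hypothetical example:
print(map_secondary_view(x1=250, L2=500, L1=1000, L3=1920, K=2.0, W3=1080))
# (960.0, 0, 1920.0, 1080)
```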
Through the embodiments of the present disclosure, in terms of accuracy, when an inspector opens a package for inspection, the position of contraband can be located through the suspect frame on the visible light image of the package's appearance together with the suspect frame on the X-ray image. Furthermore, the method can be combined with AI recognition results and voice prompts from the image judge, so that contraband positions can be located quickly and accurately and the contraband found in the package.
Through the embodiments of the present disclosure, in terms of security inspection efficiency, the centralized image judging station can display and judge images in real time, ensuring that image judgment stays synchronized with the security inspection device. Compared with local image judging, no judging delay is introduced, so the method is effective in scenarios with high real-time requirements, such as subway security inspection. Meanwhile, because the suspect frame is also superimposed on the visible light image of the package's appearance, the efficiency with which a local security inspector locates suspected articles when opening a package can be improved, achieving cooperation between the local security inspector and remote image judgment.
Fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 12, the image processing apparatus 400 includes an acquisition module 410, a determination module 420, and a marking module 430.
The acquiring module 410 is configured to acquire a scan image of a suspected object, where the scan image is obtained by scanning an object to be inspected through a security inspection device.
The determining module 420 is configured to determine a visible light image corresponding to the scanned image according to a starting scanning time of the scanned image, where the visible light image is acquired by a visible light image acquisition device.
The marking module 430 is configured to mark the suspected object in the visible light image according to the marking position of the suspected object in the scanned image.
By marking suspected objects in the visible light image, unpacking inspection and remote image judgment can cooperate, helping local staff quickly and accurately find the package to be opened and the suspected objects inside it.
According to an embodiment of the present disclosure, the image processing apparatus 400 further includes a display module for displaying the visible light image marked with the suspected object on the electronic device of the unpacking inspection station to indicate the position of the suspected object.
According to an embodiment of the present disclosure, the determining module 420 includes a first acquiring unit, a first determining unit, and a second determining unit.
The first acquisition unit is used for acquiring the transmission time length of the checked object in the security inspection equipment.
The first determining unit is used for determining the acquisition time of the visible light image according to the transmission time of the checked object in the security inspection device and the starting scanning time of the scanning image.
The second determining unit is used for determining the visible light image corresponding to the scanning image from the images acquired by the visible light image acquisition device according to the acquisition time of the visible light image.
According to an embodiment of the present disclosure, the marking module 430 includes a second acquisition unit, a third determination unit, and a marking unit.
The second acquisition unit is used for acquiring a pixel mapping relation between the scanning image and the visible light image.
The third determining unit is used for determining the mark position of the suspected object in the visible light image according to the mark position of the suspected object in the scanned image, the size information of the scanned image and the pixel mapping relation.
The marking unit is used for marking the suspected object in the visible light image according to the marking position of the suspected object in the visible light image.
According to the embodiment of the disclosure, the marking position of the suspected object in the visible light image is determined according to the following formula according to the marking position of the suspected object in the scanned image, the size information of the scanned image and the pixel mapping relation:
x2=(x1/L1)*L3*K,y2=(y1/W1)*(W3-c-d)+d,
L4=(L2/L1)*L3*K,W4=(W2/W1)*(W3-c-d).
The coordinates of the suspect frame in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspect frame in the scanned image, and W2 is the width of the suspect frame in the scanned image; the coordinates of the suspect frame in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspect frame in the visible light image, and W4 is the width of the suspect frame in the visible light image; K is the pixel mapping relation, c represents the width of the gap between the upper edge of the belt in the visible light image and the upper edge of the visible light image, and d represents the width of the gap between the lower edge of the belt in the visible light image and the lower edge of the visible light image.
According to embodiments of the present disclosure, the acquiring module 410 is configured to acquire the scanned image of the marked suspected object from the image judging station and/or from the automatic image judging server.
Any number of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure, or at least part of their functionality, may be implemented in one module. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging the circuit, or in any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any of the acquisition module 410, determination module 420, and tagging module 430 may be combined in one module to be implemented, or any of the modules may be split into multiple modules. Or at least some of the functionality of one or more of the modules may be combined with, and implemented in, at least some of the functionality of other modules. According to embodiments of the present disclosure, at least one of the acquisition module 410, the determination module 420, and the tagging module 430 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or by hardware or firmware in any other reasonable way of integrating or packaging the circuitry, or in any one of or a suitable combination of any of three implementations of software, hardware, and firmware. Or at least one of the acquisition module 410, the determination module 420 and the marking module 430 may be at least partially implemented as computer program modules which, when executed, perform the corresponding functions.
Fig. 13 schematically illustrates a block diagram of a computer system suitable for implementing the image processing methods and apparatus according to embodiments of the present disclosure. The computer system illustrated in fig. 13 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 13, computer system 500 includes a processor 510 and a computer-readable storage medium 520. The computer system 500 may perform methods according to embodiments of the present disclosure.
In particular, processor 510 may include, for example, a general purpose microprocessor, an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 510 may also include on-board memory for caching purposes. Processor 510 may be a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
Computer-readable storage medium 520, which may be, for example, a non-volatile computer-readable storage medium, specific examples include, but are not limited to: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; etc.
The computer-readable storage medium 520 may include a computer program 521, which computer program 521 may include code/computer-executable instructions that, when executed by the processor 510, cause the processor 510 to perform a method according to an embodiment of the present disclosure or any variation thereof.
The computer program 521 may be configured with computer program code comprising, for example, computer program modules. For example, in an example embodiment, code in computer program 521 may include one or more program modules, such as modules 521A and 521B. It should be noted that the division and number of modules are not fixed; a person skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 510, they enable the processor 510 to perform the method according to embodiments of the present disclosure or any variations thereof.
At least one of the acquisition module 410, the determination module 420, and the tagging module 430 may be implemented as computer program modules described with reference to fig. 13, which when executed by the processor 510, may implement the respective operations described above, in accordance with embodiments of the present invention.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should, therefore, not be limited to the above-described embodiments, but should be determined not only by the following claims, but also by the equivalents of the following claims.

Claims (12)

1. An image processing method, comprising:
Acquiring a scanning image marked with a suspected object, wherein the scanning image is obtained by scanning an inspected object through security inspection equipment;
Determining a visible light image corresponding to the scanning image according to the starting scanning time of the scanning image, wherein the visible light image is acquired by a visible light image acquisition device on the checked object; and
Marking the suspected object in the visible light image according to the marking position of the suspected object in the scanning image;
Wherein, according to the starting scanning time of the scanning image, determining the visible light image corresponding to the scanning image includes:
Acquiring the transmission time length of the checked object in the security inspection equipment;
determining the acquisition time of the visible light image according to the transmission time of the checked object in the security inspection equipment and the starting scanning time of the scanning image; and
And determining the visible light image corresponding to the scanning image from the images acquired by the visible light image acquisition equipment according to the acquisition time of the visible light image.
2. The method of claim 1, further comprising:
And displaying the visible light image marked with the suspicious object on the electronic equipment of the inspection station to indicate the position of the suspicious object.
3. The method of claim 1, wherein marking the suspect in the visible light image according to the marking location of the suspect in the scanned image comprises:
acquiring a pixel mapping relation between the scanning image and the visible light image;
Determining the mark position of the suspected object in the visible light image according to the mark position of the suspected object in the scanning image, the size information of the scanning image and the pixel mapping relation; and
And marking the suspected object in the visible light image according to the marking position of the suspected object in the visible light image.
4. The method of claim 3, wherein the marking position of the suspected object in the visible light image is determined according to the marking position of the suspected object in the scanned image, the size information of the scanned image and the pixel mapping relation according to the following formula:
x2=(x1/L1)*L3*K,y2=(y1/W1)*(W3-c-d)+d,
L4=(L2/L1)*L3*K,W4=(W2/W1)*(W3-c-d);
wherein the coordinates of a suspect frame in the scan image are (x1, y1), L1 is a length of the scan image, W1 is a width of the scan image, L2 is a length of the suspect frame in the scan image, and W2 is a width of the suspect frame in the scan image; the coordinates of the suspect frame in the visible light image are (x2, y2), L3 is a length of the visible light image, W3 is a width of the visible light image, L4 is a length of the suspect frame in the visible light image, and W4 is a width of the suspect frame in the visible light image; K is the pixel mapping relation, c represents a width of a gap between an upper edge of a belt in the visible light image and an upper edge of the visible light image, and d represents a width of a gap between a lower edge of the belt in the visible light image and a lower edge of the visible light image.
5. The method of claim 1, wherein the acquiring a scanned image marked with a suspicious object comprises:
acquiring the scanned image with the marked suspicious object from an image review station; and/or
acquiring the scanned image with the marked suspicious object from an automatic judgment server.
6. An image processing apparatus, comprising:
an acquisition module configured to acquire a scanned image marked with a suspicious object, wherein the scanned image is obtained by scanning an inspected object with a security inspection device;
a determining module configured to determine a visible light image corresponding to the scanned image according to a start scanning time of the scanned image, wherein the visible light image is acquired of the inspected object by a visible light image acquisition device; and
a marking module configured to mark the suspicious object in the visible light image according to a marking position of the suspicious object in the scanned image;
wherein the determining module comprises:
a first acquiring unit configured to acquire a transmission duration of the inspected object in the security inspection device;
a first determining unit configured to determine an acquisition time of the visible light image according to the transmission duration of the inspected object in the security inspection device and the start scanning time of the scanned image; and
a second determining unit configured to determine, from images acquired by the visible light image acquisition device, the visible light image corresponding to the scanned image according to the acquisition time of the visible light image.
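As an illustrative sketch (not part of the claims), the determining module's frame selection could look like the following; the names, the nearest-timestamp selection policy, and the sign of the time offset are assumptions, not taken from the claims:

```python
from datetime import datetime, timedelta

def select_visible_frame(scan_start, transit_seconds, frames):
    """Choose the visible-light frame corresponding to a scanned image.

    frames: list of (timestamp, image) pairs captured by the camera.
    The expected acquisition time is derived from the scan start time
    and the object's transmission duration through the inspection
    device; here the camera is assumed to sit at the tunnel entrance,
    so the frame was captured before scanning began.
    """
    target = scan_start - timedelta(seconds=transit_seconds)
    # Pick the frame whose timestamp is closest to the expected time.
    return min(frames, key=lambda f: abs((f[0] - target).total_seconds()))
```

With a camera at the tunnel exit, the offset would be added instead of subtracted; the claims leave the camera placement open.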
7. The apparatus of claim 6, further comprising:
a display module configured to display, on an electronic device of an inspection station, the visible light image marked with the suspicious object to indicate the position of the suspicious object.
8. The apparatus of claim 6, wherein the marking module comprises:
a second acquiring unit configured to acquire a pixel mapping relationship between the scanned image and the visible light image;
a third determining unit configured to determine a marking position of the suspicious object in the visible light image according to the marking position of the suspicious object in the scanned image, size information of the scanned image, and the pixel mapping relationship; and
a marking unit configured to mark the suspicious object in the visible light image according to the marking position of the suspicious object in the visible light image.
9. The apparatus of claim 8, wherein the marking position of the suspicious object in the visible light image is determined, according to the marking position of the suspicious object in the scanned image, the size information of the scanned image, and the pixel mapping relationship, by the following formulas:
x2 = (x1 / L1) * L3 * K, y2 = (y1 / W1) * (W3 - c - d) + d,
L4 = (L2 / L1) * L3 * K, W4 = (W2 / W1) * (W3 - c - d);
wherein the coordinates of the suspicious-object frame in the scanned image are (x1, y1), L1 is the length of the scanned image, W1 is the width of the scanned image, L2 is the length of the suspicious-object frame in the scanned image, W2 is the width of the suspicious-object frame in the scanned image, the coordinates of the suspicious-object frame in the visible light image are (x2, y2), L3 is the length of the visible light image, W3 is the width of the visible light image, L4 is the length of the suspicious-object frame in the visible light image, W4 is the width of the suspicious-object frame in the visible light image, K is the pixel mapping relationship, c is the width of the gap between the upper edge of the conveyor belt in the visible light image and the upper edge of the visible light image, and d is the width of the gap between the lower edge of the conveyor belt in the visible light image and the lower edge of the visible light image.
10. The apparatus of claim 6, wherein the acquisition module is configured to:
acquire the scanned image with the marked suspicious object from an image review station; and/or
acquire the scanned image with the marked suspicious object from an automatic judgment server.
11. A computer system, comprising:
one or more processors; and
a computer-readable storage medium storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 5.
12. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 5.
CN201910767309.6A 2019-08-19 2019-08-19 Image processing method, device, computer system and readable storage medium Active CN112396649B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910767309.6A CN112396649B (en) 2019-08-19 2019-08-19 Image processing method, device, computer system and readable storage medium
PCT/CN2020/089633 WO2021031626A1 (en) 2019-08-19 2020-05-11 Method and device for image processing, computer system and readable storage medium

Publications (2)

Publication Number Publication Date
CN112396649A (en) 2021-02-23
CN112396649B (en) 2024-05-28

Family

ID=74603644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910767309.6A Active CN112396649B (en) 2019-08-19 2019-08-19 Image processing method, device, computer system and readable storage medium

Country Status (2)

Country Link
CN (1) CN112396649B (en)
WO (1) WO2021031626A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393429B (en) * 2021-06-07 2023-03-24 杭州睿影科技有限公司 Calibration method for outlet position of target detection equipment and target detection equipment
CN117590479A (en) * 2022-08-08 2024-02-23 同方威视技术股份有限公司 Suspected article positioning system and method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108347435A (en) * 2017-12-25 2018-07-31 王方松 Security detection equipment and its collecting method for public place
CN109959969A (en) * 2017-12-26 2019-07-02 同方威视技术股份有限公司 Assist safety inspection method, device and system
CN110031909A (en) * 2019-04-18 2019-07-19 西安天和防务技术股份有限公司 Safe examination system and safety inspection method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN104849770B (en) * 2015-06-02 2016-04-06 北京航天易联科技发展有限公司 A kind of formation method based on passive Terahertz safety check imaging system
CN108846823A (en) * 2018-06-22 2018-11-20 西安天和防务技术股份有限公司 A kind of fusion method of terahertz image and visible images


Also Published As

Publication number Publication date
WO2021031626A1 (en) 2021-02-25
CN112396649A (en) 2021-02-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant