CN110751767A - Image processing method and device - Google Patents

Info

Publication number
CN110751767A
CN110751767A
Authority
CN
China
Prior art keywords
information
target
image
module
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911054796.8A
Other languages
Chinese (zh)
Inventor
王宇
李洁
张沛
Current Assignee
China United Network Communications Group Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd filed Critical China United Network Communications Group Co Ltd
Priority to CN201911054796.8A priority Critical patent/CN110751767A/en
Publication of CN110751767A publication Critical patent/CN110751767A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys, using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Abstract

The embodiment of the invention provides an image processing method and device, relates to the field of data processing, and aims to transmit data acquired at the front end and processed at the background through a customized interface, so that the data transmission speed is increased. The wireless gate acquires image information of a user on a gate channel; ranging the image information, and selecting the target character information closest to the gate; segmenting the face part of the target character information to obtain target face information; adding a sequence code to the target face information to obtain target information; and sending the target information to the base station. The base station acquires a sequence code in the target information and determines a customized interface according to the sequence code; and transmitting the target information to the core network through the customized interface so that the core network sends the target information to the edge cloud server. The method comprises the steps that an edge cloud server obtains target information sent by a core network; and comparing the target information with the face information in the image database to obtain a comparison result. The embodiment of the application is applied to image processing.

Description

Image processing method and device
Technical Field
The embodiment of the invention relates to the field of data processing, in particular to an image processing method and device.
Background
An Automatic Fare Collection (AFC) system is a core subsystem of the operation service of an urban rail transit system. It integrates various high and new technologies such as computer technology, electromechanical integration technology and pattern recognition technology, and realizes automation of the whole processes of ticket selling, ticket checking, charging, statistics, clearing, management and the like of urban rail transit. The AFC system is divided into five layers according to function, comprising: an AFC Clearing Center (ACC), mainly responsible for account clearing, data management, passenger flow analysis, operation parameter management, safety management, ticket card issuance, ticket management, system monitoring, public information release, interconnection with external systems and other functions; a line central computer system (LC), mainly responsible for monitoring and managing the operation of the AFC system and processing various service reports; a station computer system (SC), mainly responsible for monitoring and managing station terminal equipment; station terminal equipment (SE), for providing various ticket selling/checking services to passengers; and the ticket, a boarding voucher held by the passenger.
Firstly, the data security of the server database in an AFC system in existing rail transit is poor. Secondly, although the existing AFC system adopts private network transmission, the data exchange and transmission speed is slow, so users pass through the AFC system slowly, one at a time, the experience is poor, and queuing occurs when passenger flow is large.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device, which can transmit data acquired at the front end and processed in the background through a customized interface, so that the data transmission speed is increased.
In a first aspect, an image processing method is provided for a wireless gate, including the following steps: acquiring image information of a user on a gate channel; ranging the image information, and selecting the target character information closest to the gate; segmenting the face part of the target character information to obtain target face information; adding a sequence code to the target face information to obtain target information; and sending the target information to a base station, wherein a sequence code in the target information is used for indicating the base station to return the target information to a core network through a customized interface preferentially, the core network sends the target information to an edge cloud server, and the edge cloud server compares the target information with face information in an image database to obtain a comparison result.
In a second aspect, an image processing method is provided for a base station, including the steps of: receiving target information sent by a wireless gate, wherein the target information is generated by adding sequence codes into target face information obtained by image segmentation of user image information acquired on a gate channel by the wireless gate; acquiring a sequence code in target information, and determining a customized interface according to the sequence code; and transmitting the target information to a core network through a customized interface so that the core network can send the target information to an edge cloud server, and comparing the target information with the face information in the image database by the edge cloud server to obtain a comparison result.
In a third aspect, an image processing method is provided, where the image processing method is used for an edge cloud server, and includes the following steps: acquiring target information sent by a core network, wherein the target information is generated by adding sequence codes after image segmentation is carried out on image information of a user acquired on a gate channel by a wireless gate, and the target information is sent to a base station by the wireless gate and then sent to the core network by the base station through a customized interface; and comparing the target information with the face information in the image database to obtain a comparison result.
In the scheme, the wireless gate acquires image information of a user on a gate channel; ranging the image information, and selecting the target character information closest to the gate; segmenting the face part of the target character information to obtain target face information; adding a sequence code to the target face information to obtain target information; and sending the target information to the base station. The base station acquires a sequence code in the target information and determines a customized interface according to the sequence code; and transmitting the target information to the core network through the customized interface so that the core network sends the target information to the edge cloud server. The method comprises the steps that an edge cloud server obtains target information sent by a core network; and comparing the target information with the face information in the image database to obtain a comparison result. Therefore, the sequence code is added into the target face information to indicate that the base station preferentially transmits the target information back to the core network through the customized interface, so that the transmission speed of the target information between the front-end wireless gate and the background edge cloud server is improved, and the queuing phenomenon when the passenger flow is large can be further reduced.
In a fourth aspect, an image processing apparatus is provided, which is used for a wireless gate or a chip on the wireless gate, and includes: the image acquisition module is used for acquiring image information of a user on the gate channel; the image segmentation module is used for ranging the image information acquired by the image acquisition module and selecting the target character information closest to the gate; the image segmentation module is also used for segmenting the face part of the target character information to obtain target face information; the processing module is used for adding sequence codes to the target face information obtained by the image segmentation module to obtain target information; and the sending module is used for sending the target information obtained by the processing module to the base station, wherein the sequence code in the target information is used for indicating the base station to preferentially transmit the target information back to the core network through the customized interface, the core network sends the target information to the edge cloud server, and the edge cloud server compares the target information with the face information in the image database to obtain a comparison result.
In a fifth aspect, an image processing apparatus is provided, which is used for a base station or a chip on the base station, and includes: the receiving module is used for receiving target information sent by the wireless gate, wherein the target information is generated by adding sequence codes to target face information obtained by image segmentation of user image information acquired by the wireless gate on a gate channel; the determining module is used for acquiring the sequence code in the target information received by the receiving module and determining the customized interface according to the sequence code; and the transmission module is used for transmitting the target information to the core network through the customized interface determined by the determination module so that the core network can send the target information to the edge cloud server, and the edge cloud server compares the target information with the face information in the image database to obtain a comparison result.
In a sixth aspect, an image processing apparatus is provided, which is used for an edge cloud server or a chip on the edge cloud server, and includes: an acquisition module, configured to acquire target information sent by a core network, wherein the target information is generated by adding a sequence code after image segmentation is performed on image information of a user acquired on a gate channel by a wireless gate, and the target information is sent to a base station by the wireless gate and then sent to the core network by the base station through a customized interface; and a comparison module, configured to compare the target information acquired by the acquisition module with the face information in the image database to obtain a comparison result.
In a seventh aspect, an image processing apparatus is provided, which includes a processor; when the image processing apparatus is operated, the processor executes computer-executable instructions, so that the image processing apparatus executes the image processing method.
In an eighth aspect, there is provided a computer storage medium comprising instructions which, when run on a computer, cause the computer to perform the image processing method as described above.
In a ninth aspect, a computer program product is provided, the computer program product comprising instruction code for performing the image processing method as described above.
It is understood that any one of the image processing apparatus, the computer storage medium, or the computer program product provided above is used for executing the image processing method provided above, and therefore, the beneficial effects achieved by the image processing apparatus, the computer storage medium, or the computer program product may refer to the beneficial effects of the image processing method provided above and the corresponding solutions in the following detailed description, and are not repeated herein.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an AFC system architecture according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a target feature storage method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image processing apparatus according to a first embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of an image processing apparatus according to a fourth embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an image processing apparatus according to a fifth embodiment of the present invention;
fig. 9 is a schematic structural diagram of an image processing apparatus according to a sixth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The AFC system is a core subsystem of the operation service of the urban rail transit system; it integrates various high and new technologies such as computer technology, mechanical-electrical integration technology, pattern recognition technology and the like, and realizes automation of the whole process of ticket selling, ticket checking, charging, statistics, clearing, management and the like of urban rail transit. The AFC system generally comprises a five-layer structure; each layer comprises relatively independent functions, and all layers are connected by a communication system to form a complete system. As shown in FIG. 1, the AFC system comprises a clearing management center system 11 at the first layer, line central computer systems 121-12n at the second layer, station computer systems 131-13n at the third layer, station terminal equipment 141-14n at the fourth layer, and tickets 15 at the fifth layer. The clearing management center system 11 is the management center of an AFC system under urban rail transit networked operation conditions, and is mainly responsible for functions of account clearing, data management, passenger flow analysis, operation parameter management, safety management, ticket card issuance, ticket management, system monitoring, public information release, interconnection with external systems and the like; the clearing management center system 11 generally comprises a communication front-end subsystem, a clearing management subsystem, a ticket business management subsystem, an operation management subsystem, a safety management subsystem, a ticket card issuing subsystem, an information management subsystem, a system management subsystem, a test training subsystem, a decision support subsystem, a remote disaster recovery system and other subsystems; generally, only one clearing management center system 11 is built for the rail transit investment of each city.
The line central computer system is an operation management center of the AFC system and a ticket transaction data storage, management and analysis center, and is mainly responsible for monitoring and managing the operation of the AFC system and processing various service reports; generally, each urban rail transit line is provided with a line central computer system. The station computer system is used for receiving the running parameters and the ticket business parameters issued by the line central computer system and transmitting the running parameters and the ticket business parameters to each terminal device; the system is also used for receiving ticket business transaction data uploaded by the terminal equipment and the like, and forwarding the data to the line central computer system, and is mainly responsible for monitoring and managing the station terminal equipment; generally, each urban rail transit station is provided with a set of station computer system. The station terminal equipment comprises various operation terminals of an AFC system, such as an automatic ticket vending machine, an automatic ticket checking machine (gate machine), a ticket house ticket selling (supplementing) machine, an automatic value adding (inquiring) machine and the like, and is used for providing various ticket selling/checking services for passengers; the station terminal equipment is installed in an exhibition hall of a rail transit station and is connected to a station computer system through a station network. The ticket 15 is a boarding voucher held by the passenger.
The data security of the server database in an AFC system in existing rail transit is poor; although the AFC system adopts private network transmission, the data exchange and transmission speed is slow, users pass through the AFC system slowly, one at a time, the experience is poor, and queuing occurs when passenger flow is large.
In view of the foregoing problems, an embodiment of the present application provides an image processing method, which is shown in fig. 2 and specifically includes the following steps:
201. the wireless gate acquires image information of a user on a gate channel.
The wireless gate is powered without open wiring, for example by a storage battery, which can be recharged during non-operation periods.
Furthermore, the mechanical ultra-high-definition camera is driven by the camera lifting module to move, scan and shoot image information of the user. Specifically, the mechanical ultra-high-definition camera is driven to move up and down, left and right or in other motion modes to scan and shoot image information of a user according to actual requirements.
Further, image information of the user is displayed on the ultra high definition screen.
202. The wireless gate measures the distance of the image information and selects the target character information closest to the gate.
Specifically, an infrared ranging sensor is installed in the wireless gate and used for ranging image information, and target person information closest to the gate is further selected.
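As an illustrative sketch of step 202 (not from the patent itself), the nearest-person selection can be expressed as picking the detection with the smallest ranged distance. The data layout and names below are assumptions for illustration only.

```python
# Hypothetical sketch: each detection pairs a ranged distance (metres,
# from the infrared ranging sensor) with the detected person's data.
def select_nearest_person(detections):
    """Return the person_info closest to the gate, or None if the
    gate channel is empty."""
    if not detections:
        return None
    # min over distance picks the target character information
    return min(detections, key=lambda d: d[0])[1]

queue = [(2.4, "person_A"), (0.6, "person_B"), (1.1, "person_C")]
nearest = select_nearest_person(queue)
```

Only the nearest person's data proceeds to face segmentation, which keeps the downstream payload small.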
203. And the wireless gate machine divides the face part of the target character information to obtain the target face information.
Specifically, an Augmented Reality (AR) face database is arranged in the wireless gate, and is used for the wireless gate to identify the face part in the target person information, and then segment the face part by using an image segmentation algorithm, remove redundant information, and obtain the target face information.
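The patent does not specify the segmentation algorithm; a minimal sketch of step 203, assuming a face detector has already produced a bounding box, is simply cropping that region and discarding the redundant surroundings:

```python
# Illustrative face cropping: frame is a 2-D grid of pixels (here a
# nested list), bbox is (top, left, height, width) from an assumed
# upstream face detector.
def crop_face(frame, bbox):
    top, left, h, w = bbox
    # keep only the face region; everything else is redundant information
    return [row[left:left + w] for row in frame[top:top + h]]

# toy 8x8 frame whose pixels record their own (row, col) coordinates
frame = [[(r, c) for c in range(8)] for r in range(8)]
face = crop_face(frame, (2, 3, 3, 2))  # 3 rows x 2 columns
```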
204. And adding a sequence code to the target face information by the wireless gate to obtain the target information.
For example, the wireless gate adds a sequence code M to the face information to obtain target information carrying M.
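Step 204 can be sketched as wrapping the segmented face data together with the sequence code so that downstream nodes can recognize priority traffic. The field names are invented for illustration:

```python
# Hypothetical packaging of target information carrying sequence code M.
def add_sequence_code(face_info, sequence_code):
    """Bundle face data with a sequence code that marks it as priority
    traffic for the base station."""
    return {"sequence_code": sequence_code, "face_info": face_info}

target_info = add_sequence_code(b"<face bytes>", "M")
```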
205. The wireless gate machine sends the target information to the base station, the base station obtains the sequence code in the target information, the customized interface is determined according to the sequence code, and the target information is transmitted to the core network through the customized interface.
Specifically, the base station receives target information transmitted by the wireless gate, firstly identifies the target information, and if the target information is identified to carry a sequence code, preferentially transmits the target information carrying the sequence code to the core network through the customized interface so that the core network can send the target information to the edge cloud server. The core network is connected with the edge cloud server through the network cable, so that the core network receives the target information and transmits the target information to the edge cloud server through the network cable.
Further, the base station transmits the target information to the core network through the customized interface over a first Virtual Local Area Network (VLAN) between the base station and the core network. Specifically, the first VLAN is preconfigured; for example, it may be a default value, pre-stored, or rewritten by a background administrator.
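A hedged sketch of the base station's routing decision in step 205: when a sequence code is present, the traffic takes the customized interface over the pre-configured first VLAN; everything else takes a default path. The VLAN IDs here are invented placeholders, not values from the patent.

```python
# Assumed pre-configured VLAN IDs (illustrative only).
FIRST_VLAN = 101   # dedicated to sequence-coded target information
DEFAULT_VLAN = 1   # ordinary traffic

def select_route(packet):
    """Return the VLAN a packet should use, prioritising packets that
    carry a sequence code (i.e. target information)."""
    if packet.get("sequence_code"):
        return FIRST_VLAN
    return DEFAULT_VLAN
```

Reserving a VLAN for the priority path is one way to realize "preferentially transmits the target information back to the core network".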
206. And the edge cloud server compares the target information with the face information in the image database to obtain a comparison result.
First, the edge cloud server extracts a target feature value from the target information using an image feature value extraction algorithm, for example, the Canny edge detection algorithm, the Harris corner detection algorithm, the LoG (Laplacian of Gaussian) blob detection algorithm, and the like.
Secondly, the edge cloud server compares the target characteristic value with the characteristic value of the face information in the image database to obtain a comparison result.
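The patent names candidate feature extractors but not the matching rule. One plausible sketch, assuming features are fixed-length vectors, is nearest-neighbour matching under a distance threshold; both the metric and the threshold below are assumptions:

```python
import math

def match_features(target_vec, database, threshold=0.5):
    """database maps a key (e.g. a phone number) to a stored feature
    vector; return the best-matching key, or None if no entry is
    within the threshold."""
    best_key, best_dist = None, float("inf")
    for key, vec in database.items():
        dist = math.dist(target_vec, vec)  # Euclidean distance
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= threshold else None

db = {"13800000001": [0.1, 0.9, 0.3], "13800000002": [0.8, 0.2, 0.7]}
result = match_features([0.12, 0.88, 0.31], db)
```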
Optionally, at an initial stage of establishing the edge cloud server, an image database needs to be established in the edge cloud server, and a feature value of the face information needs to be entered, so that the present application provides a target feature storage method, which is used for storing the feature value of the face information in the image database by the edge cloud server, and as shown in fig. 3, the method specifically includes the following steps:
and S1, acquiring the identity card photo when the mobile phone number of the user is registered to be accessed to the network.
Specifically, the edge cloud server associates the image database with the mobile phone number registered by the user to acquire the identification card photo when the user registers to the network, for example, the image processing device may include an image frame acquisition module for acquiring the identification card photo when the mobile phone number of the user registers to the network.
And S2, segmenting the face part of the identification card photo to obtain the target face information.
Specifically, the edge cloud server identifies a face part in the identification card photo, segments the face part through an image segmentation algorithm, removes redundant information, and obtains target face information.
And S3, extracting the target characteristic value of the target face information.
Specifically, the method for extracting the target feature value of the target face information by the edge cloud server may refer to the above method for extracting the target feature value of the target information, for example, the image processing apparatus may include a calculation analysis module for extracting the target feature value of the target face information.
And S4, storing the target characteristic value and the mobile phone number of the user into an image database after being associated.
For example, the image processing apparatus may include a frame data storage module, configured to store the target feature value in an image database after associating the target feature value with a mobile phone number of the user.
Finally, the edge cloud server compares the target feature value with the feature values of the face information in the image database to obtain the mobile phone number associated with the target feature value, and then retrieves the identity information registered when that mobile phone number was connected to the network, thereby completing face recognition.
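The enrolment flow S1-S4 can be condensed into a sketch where the edge cloud server stores only a feature value keyed by the user's phone number, never the raw ID photo. `extract_features` below is a placeholder standing in for the unspecified feature-extraction algorithm:

```python
# Placeholder feature extractor (a real system would run e.g. edge or
# corner detection on the segmented ID-photo face).
def extract_features(face_bytes):
    return [b % 7 for b in face_bytes[:3]]

image_database = {}

def enrol(phone_number, id_photo_face):
    """S1-S4: associate the feature value with the phone number; the
    photo itself is never stored in the image database."""
    image_database[phone_number] = extract_features(id_photo_face)

enrol("13900000000", b"\x01\x09\x05")
```

Storing only derived features rather than photos is the source of the database-security benefit the patent claims.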
Optionally, if it is determined that the user passing time on the gate channel exceeds the predetermined threshold, the wireless gate sends a call instruction to the edge cloud server. Specifically, the predetermined threshold is preconfigured; for example, it may be a default value, pre-stored, or rewritten by a background administrator.
And if the edge cloud server determines that the calling instruction of the wireless gate is received, transmitting the content in the extensible ultra-high-definition database to the core network, transmitting the content in the extensible ultra-high-definition database to the base station by the core network through the second VLAN, and forwarding the content to the wireless gate by the base station.
And the wireless gate acquires the content in the extensible ultra-high-definition database transmitted by the edge cloud server and displays the content on the ultra-high-definition screen.
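The idle-display trigger can be sketched as a simple timeout check: when no user has passed the gate channel for longer than the pre-configured threshold, the gate issues the call instruction so the edge cloud server pushes content back over the second VLAN. The timing values are illustrative assumptions:

```python
# Assumed default threshold in seconds (the patent leaves the value
# to configuration).
PREDETERMINED_THRESHOLD_S = 30.0

def should_send_call_instruction(last_pass_time_s, now_s,
                                 threshold_s=PREDETERMINED_THRESHOLD_S):
    """True when the idle time on the gate channel exceeds the
    predetermined threshold, triggering the call instruction."""
    return (now_s - last_pass_time_s) > threshold_s
```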
In the scheme, the wireless gate acquires image information of a user on a gate channel; ranging the image information, and selecting the target character information closest to the gate; segmenting the face part of the target character information to obtain target face information; adding a sequence code to the target face information to obtain target information; and sending the target information to the base station. The base station acquires a sequence code in the target information and determines a customized interface according to the sequence code; and transmitting the target information to the core network through the customized interface so that the core network sends the target information to the edge cloud server. The method comprises the steps that an edge cloud server obtains target information sent by a core network; and comparing the target information with the face information in the image database to obtain a comparison result. Therefore, firstly, the sequence code is added into the target face information to indicate that the base station preferentially transmits the target information back to the core network through the customized interface, so that the transmission speed of the target information between the front-end wireless gate and the background edge cloud server is improved, and the queuing phenomenon when the passenger flow is large can be further reduced. Secondly, a first virtual local area network VLAN is marked out in the method and used for transmitting the target information, and a second virtual local area network VLAN is used for transmitting the content in the extensible ultra high definition database, so that the transmission speed of the target information between a wireless gate at the front end and a background edge cloud server is further improved. 
And thirdly, the face information of the identity card photo is stored in the image database of the edge cloud server after the characteristic value is extracted, so that the photo is prevented from being directly stored in the image database, and the safety of the database of the edge cloud server is improved. Finally, when no user passes through the gate channel, the content in the extensible ultra-high-definition database in the edge cloud server is displayed on the ultra-high-definition screen, and the extensibility of the ultra-high-definition screen is achieved.
In the embodiment of the present invention, the image processing apparatus may be divided into functional modules according to the method embodiments described above, for example, each functional module may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 4, the present application provides an image processing apparatus for a wireless gate or a chip on the wireless gate, including: the image acquisition module 41 is used for acquiring image information of a user on a gate channel; an image segmentation module 42, configured to perform distance measurement on the image information acquired by the image acquisition module 41, and select target person information closest to a gate; the image segmentation module 42 is further configured to segment a face portion of the target person information to obtain target face information; a processing module 43, configured to add a sequence code to the target face information obtained by the image segmentation module 42 to obtain target information; a sending module 44, configured to send the target information obtained by processing by the processing module 43 to a base station, where a sequence code in the target information is used to indicate that the base station preferentially transmits the target information back to a core network through a customized interface, the core network sends the target information to an edge cloud server, and the edge cloud server compares the target information with face information in an image database to obtain a comparison result.
Optionally, the image acquisition module 41 is specifically configured to drive the mechanical ultra-high-definition camera to scan and shoot image information of the user through the camera lifting module.
Optionally, the sending module 44 is further configured to send a call instruction to the edge cloud server if it is determined that the time during which no user passes through the gate channel exceeds a predetermined threshold, where the call instruction is used to instruct the edge cloud server to transmit content in the extensible ultra-high-definition database; and a display module 45 is configured to obtain the content in the extensible ultra-high-definition database transmitted by the edge cloud server and display it on an ultra-high-definition screen. Optionally, a first virtual local area network (VLAN) is set between the base station and the core network for transmitting the target information.
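A minimal sketch of this idle-screen trigger, assuming an invented helper name and a 30-second threshold (the patent does not fix a concrete value):

```python
import time

IDLE_THRESHOLD_S = 30.0  # illustrative; not specified by the patent

def should_call_edge_server(last_pass_time, now=None):
    """True when no user has passed the gate channel for longer than the threshold."""
    now = time.monotonic() if now is None else now
    return (now - last_pass_time) > IDLE_THRESHOLD_S
```

When this returns true, the sending module would issue the call instruction and the display module would show the returned ultra-high-definition content.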
In the case of an integrated module, an image processing apparatus for a wireless gate or a chip on a wireless gate includes a storage unit, a processing unit, and an interface unit. The processing unit controls and manages the actions of the image processing apparatus; the interface unit is responsible for information interaction between the image processing apparatus and other devices; and the storage unit stores the program code and data of the image processing apparatus.
The processing unit may be a processor, the storage unit a memory, and the interface unit a communication interface.
The image processing apparatus for a wireless gate or a chip on a wireless gate is shown in fig. 5, and includes a processor 502, where the processor 502 is configured to execute application program codes, so as to implement the image processing method described in the embodiment of the present application.
The processor 502 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the present disclosure.
As shown in fig. 5, the image processing apparatus for a wireless gate or a chip on a wireless gate may further include a memory 503.
The memory 503 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus, or may be integral to the processor.
The memory 503 is used for storing application program codes for executing the scheme of the application, and the processor 502 controls the execution. As shown in fig. 5, the image processing apparatus for a wireless gate or a chip on a wireless gate may further include a communication interface 501. The communication interface 501, the processor 502, and the memory 503 may be coupled to each other, for example, by a bus 504.
The communication interface 501 is used for information interaction with other devices, for example, supporting information interaction between an image processing apparatus used for a wireless gate or a chip on the wireless gate and other devices, for example, acquiring data from or transmitting data to other devices.
Referring to fig. 6, the present application provides an image processing apparatus for a base station or a chip on the base station, including: the receiving module 61 is configured to receive target information sent by the wireless gate, where the target information is generated by adding sequence codes to target face information obtained by image segmentation of image information of a user acquired by the wireless gate on a gate channel; a determining module 62, configured to obtain a sequence code in the target information received by the receiving module 61, and determine a customized interface according to the sequence code; a transmission module 63, configured to transmit the target information to a core network through the customized interface determined by the determination module 62, so that the core network sends the target information to an edge cloud server, and the edge cloud server compares the target information with face information in an image database to obtain a comparison result.
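The base-station behavior above, selecting the customized interface from the sequence code in the received target information, can be sketched as follows; the set of priority codes and the interface names are illustrative assumptions:

```python
PRIORITY_SEQ_CODES = {1}  # illustrative codes mapped to the customized interface

def route(message):
    """Choose the interface toward the core network from the message's sequence code."""
    if message.get("seq") in PRIORITY_SEQ_CODES:
        return "customized-interface"  # priority path for face-recognition traffic
    return "default-interface"
```

Traffic carrying a recognized sequence code is returned to the core network with priority; everything else takes the default path.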
Optionally, the transmission module 63 is specifically configured to transmit the target information to the core network through the customized interface in a first virtual local area network VLAN between the base station and the core network.
Optionally, the receiving module 61 is further configured to receive content in an extensible ultra-high-definition database transmitted by the core network through a second VLAN, and to send that content to the wireless gate, where the core network receives the content in the extensible ultra-high-definition database from the edge cloud server.
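The two-VLAN split (target information on the first VLAN, ultra-high-definition content on the second) can be sketched as a simple traffic-class mapping; the VLAN IDs and class names are illustrative assumptions:

```python
# Uplink face-recognition traffic and downlink display content are kept on
# separate VLANs so that priority handling can differ per class.
VLAN_OF_TRAFFIC = {
    "target_info_uplink": 10,    # first VLAN: gate -> base station -> core network
    "uhd_content_downlink": 20,  # second VLAN: core network -> base station -> gate
}

def vlan_for(traffic_class):
    """Return the VLAN ID carrying the given traffic class."""
    return VLAN_OF_TRAFFIC[traffic_class]
```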
In the case of an integrated module, an image processing apparatus for a base station or a chip on a base station includes a storage unit, a processing unit, and an interface unit. The processing unit controls and manages the actions of the image processing apparatus; the interface unit is responsible for information interaction between the image processing apparatus and other devices; and the storage unit stores the program code and data of the image processing apparatus.
The processing unit may be a processor, the storage unit a memory, and the interface unit a communication interface.
The image processing apparatus for a base station or a chip on a base station is shown in fig. 7, and includes a processor 702, where the processor 702 is configured to execute an application program code, so as to implement the image processing method described in this embodiment of the present application.
The processor 702 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the present disclosure.
As shown in fig. 7, the image processing apparatus for a base station or a chip on a base station may further include a memory 703.
The memory 703 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus, or may be integral to the processor.
The memory 703 is used for storing the application program code for executing the scheme of the present application, and the processor 702 controls the execution. As shown in fig. 7, the image processing apparatus for a base station or a chip on a base station may further include a communication interface 701. The communication interface 701, the processor 702, and the memory 703 may be coupled to each other, for example, by a bus 704.
The communication interface 701 is used for information interaction with other devices, for example, supporting information interaction between an image processing apparatus used for a base station or a chip on the base station and other devices, for example, acquiring data from or transmitting data to other devices.
Referring to fig. 8, the present application provides an image processing apparatus for an edge cloud server or a chip on the edge cloud server, including: the acquisition module 81 is configured to acquire target information sent by a core network, where the target information is generated by adding sequence codes after image segmentation is performed on image information of a user acquired by a wireless gate on a gate channel, and the target information is sent to a base station by the wireless gate and then sent to the core network by the base station through a customized interface; a comparison module 82, configured to compare the target information acquired by the acquisition module 81 with face information in an image database, and acquire a comparison result.
Optionally, the comparison module 82 is specifically configured to extract a target feature value from the target information acquired by the acquisition module, where the target feature value is that of the target face information obtained by the wireless gate performing face segmentation on the selected person information closest to the gate; the comparison module 82 is specifically configured to compare the target feature value with the feature values of the face information in the image database and obtain a comparison result.
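The comparison step can be sketched as a nearest-neighbour search over feature values; the Euclidean metric and the 0.5 acceptance threshold are illustrative assumptions, not values from the patent:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def compare(target_feature, database, threshold=0.5):
    """Return (phone_number, distance) of the closest enrolled feature, or None."""
    if not database:
        return None
    key, feature = min(database.items(),
                       key=lambda kv: euclidean(target_feature, kv[1]))
    dist = euclidean(target_feature, feature)
    return (key, dist) if dist < threshold else None
```

A returned pair is the comparison result sent back through the core network; `None` would correspond to no match in the image database.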
Optionally, the obtaining module 81 is further configured to obtain an identity card photo when the mobile phone number of the user is registered to the network; a segmentation module 83, configured to segment the face portion of the identification card photo acquired by the acquisition module 81 to acquire target face information; an extracting module 84, configured to extract a target feature value of the target face information obtained by the segmenting module 83; a storage module 85, configured to store the target feature value extracted by the extraction module 84 in the image database after associating the target feature value with the mobile phone number of the user.
Optionally, the transmission module 86 is configured to transmit the content in the extensible ultra high definition database to the wireless gate if it is determined that the call instruction of the wireless gate is received, where the content in the extensible ultra high definition database is used for displaying on an ultra high definition screen by the wireless gate.
In the case of employing an integrated module, an image processing apparatus for an edge cloud server or a chip on an edge cloud server includes a storage unit, a processing unit, and an interface unit. The processing unit controls and manages the actions of the image processing apparatus; the interface unit is responsible for information interaction between the image processing apparatus and other devices; and the storage unit stores the program code and data of the image processing apparatus.
The processing unit may be a processor, the storage unit a memory, and the interface unit a communication interface.
The image processing apparatus for the edge cloud server or the chip on the edge cloud server is shown in fig. 9, and includes a processor 902, where the processor 902 is configured to execute an application program code, so as to implement the image processing method described in the embodiment of the present application.
The processor 902 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the present disclosure.
As shown in fig. 9, the image processing apparatus for the edge cloud server or the chip on the edge cloud server may further include a memory 903.
The memory 903 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via a bus, or may be integral to the processor.
The memory 903 is used for storing application program codes for executing the scheme of the application, and the processor 902 controls the execution. As shown in fig. 9, the image processing apparatus for the edge cloud server or the chip on the edge cloud server may further include a communication interface 901. The communication interface 901, the processor 902, and the memory 903 may be coupled to each other, for example, by a bus 904.
The communication interface 901 is used for information interaction with other devices, for example, supporting information interaction between an image processing apparatus used for an edge cloud server or a chip on the edge cloud server and other devices, for example, acquiring data from or sending data to other devices.
Further, a computer storage medium (or media) is also provided, including instructions which, when executed, perform the operations of the image processing method in the above embodiments. In addition, a computer program product is also provided, comprising the above computer storage medium (or media).
For the details of each step in the above method embodiments, reference may be made to the functional description of the corresponding functional module; they are not repeated here.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art would appreciate that the various illustrative modules, elements, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative, e.g., multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (23)

1. An image processing method for a wireless gate is characterized in that,
acquiring image information of a user on a gate channel;
ranging the image information, and selecting target person information closest to a gate;
segmenting the face part of the target person information to obtain target face information;
adding a sequence code to the target face information to obtain target information;
and sending the target information to a base station, wherein a sequence code in the target information is used for indicating the base station to preferentially transmit the target information back to a core network through a customized interface, the core network sends the target information to an edge cloud server, and the edge cloud server compares the target information with face information in an image database to obtain a comparison result.
2. The image processing method according to claim 1, wherein the acquiring image information of the user on the gate passage comprises:
the mechanical ultra-high-definition camera is driven by the camera lifting module to move, scan and shoot image information of a user.
3. The image processing method according to claim 1, further comprising:
if it is determined that the time during which no user passes on the gate channel exceeds a preset threshold value, sending a calling instruction to the edge cloud server, wherein the calling instruction is used for instructing the edge cloud server to transmit the content in the extensible ultra-high-definition database;
and acquiring the content in the extensible ultra-high-definition database transmitted by the edge cloud server, and displaying the content on an ultra-high-definition screen.
4. An image processing method for a base station, characterized in that,
receiving target information sent by a wireless gate, wherein the target information is generated by adding sequence codes to target face information obtained by image segmentation of user image information acquired on a gate channel by the wireless gate;
acquiring a sequence code in the target information, and determining a customized interface according to the sequence code;
and transmitting the target information to a core network through the customized interface so that the core network can send the target information to an edge cloud server, and the edge cloud server compares the target information with the face information in the image database to obtain a comparison result.
5. The image processing method of claim 4, wherein the transmitting the target information to a core network through the customized interface comprises:
and transmitting the target information to a core network through a first Virtual Local Area Network (VLAN) between the base station and the core network through the customized interface.
6. The image processing method according to claim 4, further comprising:
and receiving the content in the extensible ultra high definition database transmitted by the core network through a second VLAN, and sending the content in the extensible ultra high definition database to the wireless gate, wherein the core network receives the content in the extensible ultra high definition database sent by the edge cloud server.
7. An image processing method for an edge cloud server, characterized in that,
acquiring target information sent by a core network, wherein the target information is generated by adding sequence codes after image segmentation is carried out on image information of a user acquired on a gate channel by a wireless gate, and the target information is sent to a base station by the wireless gate and then sent to the core network by the base station through a customized interface;
and comparing the target information with the face information in the image database to obtain a comparison result.
8. The image processing method according to claim 7, wherein the comparing the target information with the face information in the image database to obtain a comparison result comprises:
extracting a target characteristic value from the target information, wherein the target characteristic value is the target characteristic value of the target face information obtained by the wireless gate carrying out face segmentation on the selected person information closest to the gate;
and comparing the target characteristic value with the characteristic value of the face information in the image database to obtain a comparison result.
9. The image processing method according to claim 7, wherein before comparing the target information with face information in an image database and obtaining a comparison result, the method further comprises:
acquiring an identity card photo when a mobile phone number of a user is registered to be accessed to a network;
segmenting the face part of the identity card photo to obtain target face information;
extracting a target characteristic value of the target face information;
and storing the target characteristic value and the mobile phone number of the user into the image database after associating the target characteristic value with the mobile phone number of the user.
10. The image processing method according to claim 7, further comprising:
and if the calling instruction of the wireless gate is determined to be received, transmitting the content in the extensible ultra-high-definition database to the wireless gate, wherein the content in the extensible ultra-high-definition database is used for displaying on an ultra-high-definition screen by the wireless gate.
11. An image processing apparatus for a wireless gate or a chip on a wireless gate, comprising:
the image acquisition module is used for acquiring image information of a user on the gate channel;
the image segmentation module is used for ranging the image information acquired by the image acquisition module and selecting the target person information closest to the gate;
the image segmentation module is also used for segmenting the face part of the target person information to obtain target face information;
the processing module is used for adding a sequence code to the target face information obtained by the image segmentation module to obtain target information;
and the sending module is used for sending the target information obtained by the processing module to a base station, wherein a sequence code in the target information is used for indicating that the base station preferentially transmits the target information back to a core network through a customized interface, the core network sends the target information to an edge cloud server, and the edge cloud server compares the target information with face information in an image database to obtain a comparison result.
12. The image processing apparatus according to claim 11,
the image acquisition module is specifically used for driving the mechanical ultra-high-definition camera to move, scan and shoot image information of a user through the camera lifting module.
13. The image processing apparatus according to claim 11,
the sending module is further configured to send a calling instruction to the edge cloud server if it is determined that the time during which no user passes on the gate channel exceeds a predetermined threshold, where the calling instruction is used to instruct the edge cloud server to transmit content in an extensible ultra-high-definition database;
and the display module is used for acquiring the content in the extensible ultra-high-definition database transmitted by the edge cloud server and displaying the content on an ultra-high-definition screen.
14. An image processing apparatus for a base station or a chip on the base station, comprising:
the receiving module is used for receiving target information sent by the wireless gate, wherein the target information is generated by adding sequence codes to target face information obtained by image segmentation of user image information acquired on a gate channel by the wireless gate;
the determining module is used for acquiring the sequence code in the target information received by the receiving module and determining the customized interface according to the sequence code;
and the transmission module is used for transmitting the target information to a core network through the customized interface determined by the determination module so that the core network can send the target information to an edge cloud server, and the edge cloud server compares the target information with the face information in the image database to obtain a comparison result.
15. The image processing apparatus according to claim 14,
the transmission module is specifically configured to transmit the target information to a core network through the customized interface in a first virtual local area network VLAN between the base station and the core network.
16. The image processing apparatus according to claim 14,
the receiving module is further configured to receive content in an extensible ultra high definition database transmitted by the core network through a second VLAN, and send the content in the extensible ultra high definition database to the wireless gate, where the core network receives the content in the extensible ultra high definition database sent by the edge cloud server.
17. An image processing apparatus for an edge cloud server or a chip on an edge cloud server, comprising:
the system comprises an acquisition module, a sequence code generation module and a processing module, wherein the acquisition module is used for acquiring target information sent by a core network, the target information is generated by adding the sequence code after image segmentation is carried out on image information of a user acquired on a gate channel by a wireless gate, and the target information is sent to a base station by the wireless gate and then sent to the core network by the base station through a customized interface;
and the comparison module is used for comparing the target information acquired by the acquisition module with the face information in the image database to acquire a comparison result.
18. The image processing apparatus according to claim 17,
the comparison module is specifically configured to extract a target feature value from the target information acquired by the acquisition module, where the target feature value is the target feature value of target face information obtained by the wireless gate performing face segmentation on the selected person information closest to the gate;
the comparison module is specifically configured to compare the target feature value with a feature value of face information in an image database, and obtain a comparison result.
19. The image processing apparatus according to claim 17, further comprising:
the acquisition module is also used for acquiring an identity card photo when the mobile phone number of the user is registered and accessed to the network;
the segmentation module is used for segmenting the face part of the identity card photo acquired by the acquisition module to acquire target face information;
the extraction module is used for extracting the target characteristic value of the target face information obtained by the segmentation module;
and the storage module is used for storing the target characteristic value extracted by the extraction module and the mobile phone number of the user into the image database after being associated.
20. The image processing apparatus according to claim 17, further comprising:
and the transmission module is used for transmitting the content in the extensible ultra-high-definition database to the wireless gate if the calling instruction of the wireless gate is determined to be received, wherein the content in the extensible ultra-high-definition database is used for displaying on an ultra-high-definition screen by the wireless gate.
21. An image processing apparatus comprising a processor which, when the image processing apparatus is operating, executes computer-executable instructions to cause the image processing apparatus to perform the image processing method according to any one of claims 1 to 10.
22. A computer storage medium comprising instructions that, when executed on a computer, cause the computer to perform the image processing method of any one of claims 1-10.
23. A computer program product, characterized in that it comprises instruction code for executing the image processing method according to any one of claims 1 to 10.
CN201911054796.8A 2019-10-31 2019-10-31 Image processing method and device Pending CN110751767A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911054796.8A CN110751767A (en) 2019-10-31 2019-10-31 Image processing method and device


Publications (1)

Publication Number Publication Date
CN110751767A 2020-02-04


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111780673A (en) * 2020-06-17 2020-10-16 杭州海康威视数字技术股份有限公司 Distance measurement method, device and equipment
CN112560775A (en) * 2020-12-25 2021-03-26 深圳市商汤科技有限公司 Switch control method and device, computer equipment and storage medium
CN112580553A (en) * 2020-12-25 2021-03-30 深圳市商汤科技有限公司 Switch control method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105517680A (en) * 2015-04-28 2016-04-20 北京旷视科技有限公司 Device, system and method for recognizing human face, and computer program product
CN107945321A (en) * 2017-11-08 2018-04-20 平安科技(深圳)有限公司 Face-recognition-based security inspection method, application server and computer-readable storage medium
CN208225167U (en) * 2018-04-27 2018-12-11 中科源通科技(深圳)有限公司 Automatic gate for verifying identity documents and tickets
CN109314887A (en) * 2016-05-12 2019-02-05 Convida Wireless LLC Connecting to a virtualized mobile core network
CN109948586A (en) * 2019-03-29 2019-06-28 北京三快在线科技有限公司 Face verification method, apparatus, device and storage medium

Similar Documents

Publication Publication Date Title
CN110751767A (en) Image processing method and device
CN105894677A (en) Public bicycle renting system using two-dimensional code scanning
CN104700654A (en) Parking space reserving method and system
CN110738147B (en) Face recognition system and method for rail transit
CN114093038A (en) Parking payment method and device
CN108154103A Method, apparatus, device and computer storage medium for detecting promotion message salience
CN111260930A (en) Vehicle management system, device, method and computer system
CN111696241A (en) Scenic spot ticket checking and selling system and method based on face recognition
CN113361468A (en) Business quality inspection method, device, equipment and storage medium
KR20190110324A (en) System for collecting and providing the information of parking area
CN210166825U (en) Passenger service system of urban rail transit
CN112507314B (en) Client identity verification method, device, electronic equipment and storage medium
KR101915126B1 (en) management system for customer of vehicle maintenance using CRM service
CN114495364A (en) Self-service car renting method and device, electronic equipment and readable storage medium
US20200258318A1 (en) Information processing apparatus, information processing method, and program
CN106850300B (en) System and method for linkage intervention of operation services of highway toll station
CN108182419A (en) Vehicle-mounted recognition of face terminal, system, method, apparatus and storage medium
CN107453933A Service assembly platform and method
US20230169774A1 (en) Methods and apparatus for using wide area networks to support parking systems
CN110706488A (en) System and method for man-machine interaction of stereo garage
CN107067305A Self-service umbrella borrowing and returning method and system
CN105788056A (en) Event processing method and device
CN110223074A Traffic fee deduction control method, apparatus and system
KR102064776B1 Vehicle database management system for unmanned parking lot and method thereof
CN110751304A (en) Information interaction synchronization method and device for service provider

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200204