WO2019237743A1 - Image processing method, device, electronic device and computer readable storage medium - Google Patents

Info

Publication number
WO2019237743A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
processing
processing method
original image
present disclosure
Prior art date
Application number
PCT/CN2019/073069
Other languages
French (fr)
Chinese (zh)
Inventor
庄幽文
赖锦锋
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Publication of WO2019237743A1 publication Critical patent/WO2019237743A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an image processing method, apparatus, electronic device, and computer-readable storage medium.
  • an embodiment of the present disclosure provides an image processing method to at least partially solve the foregoing problems.
  • an image processing apparatus, an electronic device, and a computer-readable storage medium are also provided.
  • An image processing method includes: receiving an original image; performing first processing on the original image to obtain a first image; and performing second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
  • the value of β is associated with the value of α.
  • β = 1 - α.
  • β = 1 - α + c, where c is a constant, and 0 < c < 1.
  • the first processing is: performing blur processing on the original image.
  • the first processing is: dividing the original image into a plurality of image regions; obtaining / deleting one or more image regions in the original image to obtain an intermediate image; and blurring the intermediate image.
  • the blurring process is: calculating an average value according to the value of the current pixel point of the image and the values of neighboring pixel points around it, and using the average value as the value of the current pixel point.
  • the average value is calculated by: computing a smoothing matrix, and performing a convolution calculation on the value of the current pixel point of the image and the values of its neighboring pixel points with the smoothing matrix to obtain the average value.
  • the acquiring/deleting one or more image regions in the original image to obtain the intermediate image includes: receiving a selection instruction, where the selection instruction is used to select one or more image regions in the image; using the selected one or more image regions as the intermediate image; or deleting the selected one or more image regions and using the remaining image as the intermediate image.
  • An image processing device includes a receiving module for receiving an original image
  • a first processing module configured to perform first processing on the original image to obtain a first image
  • a second processing module configured to perform second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
  • the value of β is associated with the value of α.
  • β = 1 - α.
  • β = 1 - α + c, where c is a constant, and 0 < c < 1.
  • the first processing module includes: a first blur processing module, configured to perform blur processing on the original image.
  • the first processing module includes: a segmentation module for segmenting an original image into multiple image regions; and an intermediate processing module for acquiring / deleting one or more image regions in the original image to obtain an intermediate image
  • a second blur processing module configured to perform blur processing on the intermediate image.
  • the blurring process is: calculating an average value according to the value of the current pixel point of the image and the values of neighboring pixel points around it, and using the average value as the value of the current pixel point.
  • the average value is calculated by: computing a smoothing matrix, and performing a convolution calculation on the value of the current pixel point of the image and the values of its neighboring pixel points with the smoothing matrix to obtain the average value.
  • the intermediate processing module includes: an intermediate image selection module for selecting one or more image regions in the image; using the selected one or more image regions as the intermediate image; or deleting the selected one or more image regions and using the remaining image as the intermediate image.
  • An electronic device includes:
  • at least one processor; and
  • a memory connected in communication with the at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the image processing method according to the first aspect.
  • a non-transitory computer-readable storage medium wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute the image processing method according to the first aspect.
  • Embodiments of the present disclosure provide an image processing method, apparatus, electronic device, and computer-readable storage medium.
  • the image processing method includes: receiving an original image; performing first processing on the original image to obtain a first image; and performing second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
  • by adopting this technical solution, the embodiments of the present disclosure can adjust the result of image processing according to the coefficient β, so that users can obtain different processing effects with different coefficients, which improves the flexibility of image processing.
  • FIG. 1a is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 1b is a schematic flowchart of an image processing method according to another embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of a man-machine interface for selecting a first process according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of an image processing method according to another embodiment of the present disclosure.
  • FIG. 4a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 4b is a schematic structural diagram of an embodiment of a first processing module in the image processing apparatus of FIG. 4a.
  • FIG. 4c is a schematic structural diagram of another embodiment of a first processing module in the image processing apparatus of FIG. 4a.
  • FIG. 5 is a schematic structural diagram of an image processing hardware device according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an image processing terminal according to an embodiment of the present disclosure.
  • the image processing method mainly includes the following steps S1 to S3:
  • Step S1 Receive the original image.
  • the original image may be an unprocessed video or picture collected from an image sensor.
  • the image sensor may be a camera, an ultrasonic sensor, or the like.
  • the original image may also be a video obtained from other sources.
  • Step S2 Perform a first process on the original image to obtain a first image.
  • the first processing may be any type of processing on an image.
  • the first processing may be blur processing, that is, extracting the low-frequency components of the image; it may be segmentation processing, that is, dividing the image into a number of different regions; or it may be compression processing, which compresses the image to reduce its size.
  • the processing type of the first processing is configurable.
  • a human-machine interaction interface for the first processing may be provided for the user to select one of a plurality of first processings as the currently configured first processing; a programming interface may also be provided so that the user can write the processing steps of the first processing, to provide maximum flexibility.
  • Step S3 Perform a second process on the original image to obtain a second image.
  • the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
  • the second processing includes several sub-steps: multiplying the first image obtained in step S2 by a coefficient α, where 0 < α < 1; subtracting the product of the first image and α from the original image (since an image is a vector matrix, subtracting two images is essentially subtracting two vector matrices); and dividing the result of the subtraction by an enhancement coefficient β, 0 < β < 1, which amplifies the result by a factor of 1/β to strengthen the result of the image processing.
  • the value of the coefficient β is associated with the value of the coefficient α; they satisfy a certain functional relationship.
  • the relationship can be set as needed.
  • a human-computer interaction interface for the coefficients can be provided so that users can adjust the relationship between α and β.
  • preferably, β = 1 - α + c, where c is a constant; the existence of the constant c ensures that 1/β does not become infinite.
  • the value of α can be dynamically adjusted.
  • for example, the user can adjust the value of α through a coefficient-configuration human-computer interaction interface.
  • the human-computer interaction interface receives a coefficient configuration instruction sent by the user and configures the value of α according to the instruction; specifically, the human-computer interaction interface may be a sliding control, such as a slider, whose original position is the origin.
  • the distance by which the user drags the slider away from the origin is positively related to the value of α: the farther the slider is from the origin, the larger the value of α.
  • alternatively, the sliding control may be a knob whose initial angle is 0°; when the user rotates the knob, the greater the angle of rotation of the knob, the greater the value of α.
  • in one embodiment, the first processing is a processing set composed of multiple image sub-processes. As shown in FIG. 1b, the first processing in step S2 includes: S201, dividing the original image into a plurality of image regions; S202, acquiring/deleting one or more image regions in the original image to obtain an intermediate image; and S203, performing blur processing on the intermediate image.
  • the first process is a processing set consisting of a division process, an acquisition / deletion process, and a blur process.
  • when the first processing is performed, the image is first segmented.
  • the segmentation may follow a preset rule, or the user may manually delimit the segmented areas or ranges; after segmentation, the segmented regions that need to be retained are acquired, or the segmented regions that need to be removed are deleted.
  • the acquisition or deletion may be performed according to a preset rule, or the user may manually select the segmented regions to be acquired or deleted; finally, the acquired segmented regions, or the segmented regions remaining after deletion, are blurred.
  • when the segmentation is performed, the image is segmented according to a predetermined segmentation rule; for example, the image may be segmented into multiple image regions by key points on the image. When the acquisition is performed, a selection instruction is received; the selection instruction is used to select one or more image regions, and the selected one or more image regions are used as the intermediate image.
  • when the deletion is performed, a selection instruction is received; the selection instruction is used to select one or more image regions, the selected one or more image regions are deleted, and the remaining image is used as the intermediate image.
  • finally, the intermediate image is blurred to obtain a second image.
  • the processing set can be fixed or configurable. For example, if many images need to be batch-processed with the same configuration, a fixed processing set can be used to prevent processing errors; multiple processing sets can also be provided, together with the processing effect of each set, for the user to choose from, to provide flexibility.
  • the processing flow in the processing set may also be fixed or configurable.
  • some typical processing sets may be preset, and a human-computer interaction interface for the first processing may be provided for users.
  • as shown in FIG. 2, the first processing includes a first set, a second set, a third set, and a fourth set for the user to select; the user can choose any one set as the first processing, or combine several sets as the first processing.
  • multiple sub-processes of the first processing can also be provided in the human-machine interaction interface of the first processing.
  • the user can freely combine these sub-processes and specify the processing order between them to form a customized first processing, such as the segmentation processing, selection processing, and blur processing in the first set shown in FIG. 2.
  • the user can add and delete a sub-process, and can adjust the order between the sub-processes.
  • for example, the user can swap the order of the selection processing and the segmentation processing; in this case, the user first selects the image to be segmented and then the selected image is segmented, and the user can learn the processing effect of the customized processing flow in advance through a preview.
  • the configuration of the first process through the human-computer interaction interface is only one implementation manner, and those skilled in the art can configure the first process in any suitable manner; the configuration of the first process is not limited to the above.
  • the processing sets can also be combined to form the first processing, which is not limited in this application, and only indicates that the first processing can be preset or dynamically configured as required.
  • the present disclosure configures the first processing according to the selection instruction issued by the user, enabling the user to adjust the effect of the image processing according to his or her own needs, which improves the user experience.
  • a face image is taken as an example to describe a complete image processing embodiment:
  • Step S301 obtaining a face image
  • the face image may be a self-portrait image obtained by using an image sensor of a mobile terminal, such as a camera;
  • Step 302 locate key points of the face, and divide the face into a facial area and a facial feature area;
  • the key points on the face image are located; the key points are the key points of the facial contour and the key points of the facial features, so that the human face can be divided into facial features and facial regions
  • the division of the area here can be dynamically configured. The user can pre-configure the area to be divided according to the needs, or manually divide the area. In the case of manually dividing the area, there is no need to locate key points;
  • Step 303 Receive a selection instruction that selects the facial features area and / or the facial area;
  • the selection instruction selects the eyes and the nose; it can be understood that the selection instruction here may be to select an arbitrarily divided area.
  • step 304 the area selected by the selection instruction is deleted, and the remaining image is used as an intermediate image.
  • the deletion process is taken as an example. After the eyes and the nose are selected, the images of the eyes and the nose are deleted, and the remaining images are intermediate images.
  • Step 305 Blur the intermediate image to obtain a second image.
  • the blur processing is: calculating an average value according to the value of the current pixel point in the image and the values of its neighboring pixel points, and using the average value as the value of the current pixel point; this operation is traversed over all pixel points in the image, and the result is the blurred image.
  • the average value in the above blur processing is calculated by: computing a smoothing matrix, and performing a convolution calculation on the value of the current pixel point of the image and the values of its neighboring pixel points with the smoothing matrix to obtain the average value.
  • the Gaussian distribution matrix obtained according to the Gaussian distribution formula is:
  • Convolution calculation is performed by using a matrix composed of pixel values of the image and the smoothing matrix to obtain the average value of the pixels.
  • the values in the smoothing matrix are called smoothing coefficients. Assume that the values of one pixel in the above intermediate image and its neighboring pixels are shown in the following matrix:
  • the current pixel value is 103, and the value after blurring is:
  • the above blur processing is performed on each pixel point in the intermediate image, and the obtained intermediate image after the blur is the first image obtained after the original image is subjected to the first processing.
  • the above calculation process can be further optimized.
  • the above-mentioned smoothing matrix is a 3 × 3 two-dimensional matrix.
  • each pixel requires 9 multiplications and 8 additions, which is a large amount of calculation.
  • by decomposing the above two-dimensional matrix into two one-dimensional 1 × 3 matrices, each pixel only needs 3 multiplications and 2 additions in each of the X and Y directions, that is, 6 multiplications and 4 additions in total, which further reduces the amount of calculation (a code sketch of this separable form is given at the end of this section).
  • Step 306 Perform a second process on the original face image to obtain a second image.
  • the first image is actually the low-frequency component of the intermediate image
  • with α = 0.6, original image - first image * 0.6 is the original image in which the nose and eye parts are fully retained, while the rest of the face image has 0.6 times its low-frequency component subtracted, so that the high-frequency components and a small amount of the low-frequency components are retained.
  • in one application scenario, the user runs the image processing method on a mobile terminal with a camera and a touch screen.
  • the user's face image is obtained through the camera, and a sliding control on the touch screen is used to set the degree of sharpening.
  • when the user drags the sliding control, the screen automatically displays the sharpened face image, and the user can continue to drag the sliding control and view the sharpening effect in real time.
  • the image area to be processed is selected by a selection instruction, the degree of enhancement of the processing result is controlled by a sliding control, and the user can preview the processing result in real time, thereby improving the flexibility of image processing.
  • the following is a device embodiment of the present disclosure.
  • the device embodiment of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
  • Only parts related to the embodiments of the present disclosure are shown. Specific technical details are not disclosed. Reference is made to the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides an image processing apparatus.
  • the apparatus may perform the steps described in the above-mentioned embodiment of the image processing method.
  • the device includes a receiving module 41, a first processing module 42, and a second processing module 43.
  • a receiving module 41 configured to receive an original image
  • a first processing module 42 configured to perform a first process on an original image to obtain a first image
  • a second processing module 43 is configured to perform second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
  • the original image may be an unprocessed video or picture collected from an image sensor.
  • the original image may also be a video or picture obtained from other sources, such as downloaded from a network server or read from a removable storage; in short, the original image is not limited to an image that has not undergone any processing, but refers to an image that has not been processed by the image processing method described in the embodiments of the present application.
  • the first processing may be any type of processing on an image.
  • the first processing may be blur processing, that is, extracting the low-frequency components of the image; it may be segmentation processing, that is, dividing the image into a number of different regions; or it may be compression processing, which compresses the image to reduce its size.
  • the processing type of the first processing is configurable. For example, a human-machine interaction interface of the first processing may be provided for a user to select one of a plurality of first processings as the currently configured first processing.
  • the first processing module 42 includes: a first blur processing module 421, configured to perform blur processing on the original image
  • the blur processing is: calculating an average value according to the value of the current pixel point in the image and the values of its neighboring pixel points, and using the average value as the value of the current pixel point; this operation is traversed over all pixel points in the image, and the result is the blurred image. In this embodiment, the average value in the above blur processing is calculated by: computing a smoothing matrix, and performing a convolution calculation on the value of the current pixel point and the values of its neighboring pixel points with the smoothing matrix to obtain the average value.
  • the value of the coefficient β is associated with the value of the coefficient α; they satisfy a certain functional relationship.
  • the relationship can be set as needed.
  • a human-computer interaction interface for the coefficients can be provided so that users can adjust the relationship between α and β.
  • the value of α can be dynamically adjusted.
  • for example, the user can adjust the value of α through a coefficient-configuration human-computer interaction interface.
  • the human-computer interaction interface receives a coefficient configuration instruction sent by the user and configures the value of α according to the instruction; specifically, the human-computer interaction interface may be a sliding control, such as a slider, whose original position is the origin.
  • the distance by which the user drags the slider away from the origin is positively related to the value of α: the farther the slider is from the origin, the larger the value of α.
  • alternatively, the sliding control may be a knob whose initial angle is 0°; when the user rotates the knob, the greater the angle of rotation of the knob, the greater the value of α.
  • the first processing module 42 includes: a segmentation module 422 for dividing the original image into a plurality of image regions; an intermediate processing module 423 for acquiring/deleting one or more image regions in the original image to obtain an intermediate image; and a second blur processing module 424 configured to perform blur processing on the intermediate image.
  • the intermediate processing module 423 includes: an intermediate image selection module 4231 for selecting one or more image regions in the image; using the selected one or more image regions as the intermediate image; or deleting the selected one or more image regions and using the remaining image as the intermediate image.
  • the original image may be a face image, and the one or more image regions may be the facial area and the facial-feature areas in the face image.
  • the first process is a processing set consisting of a division process, an acquisition / deletion process, and a blur process.
  • the processing flow in the processing set may be fixed or configurable.
  • some typical processing sets may be preset, and a human-computer interaction interface for the first processing may be provided so that users can select one processing set from multiple processing sets as the currently configured first processing.
  • multiple sub-processes of the first processing may also be provided in the human-computer interaction interface of the first processing; users can freely combine these sub-processes and specify the processing order between them to form a customized first processing.
  • the present disclosure configures the first processing according to the selection instruction issued by the user, enabling the user to adjust the effect of the image processing according to his or her own needs, which improves the user experience.
  • FIG. 5 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
  • the memory 51 is configured to store non-transitory computer-readable instructions.
  • the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
  • the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • the processor 52 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions.
  • the processor 52 is configured to run the computer-readable instructions stored in the memory 51 so that the electronic device 50 executes all or part of the steps of the image processing method of the foregoing embodiments of the present disclosure.
  • this embodiment may also include well-known structures such as a communication bus and an interface; these well-known structures should also be included within the protection scope of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 60 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 61 stored thereon.
  • when the non-transitory computer-readable instructions 61 are executed by a processor, all or part of the steps of the image processing method of the foregoing embodiments of the present disclosure are performed.
  • the computer-readable storage medium 60 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), non-volatile rewritable memory media (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
  • FIG. 7 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 7, the image processing terminal 70 includes the foregoing image processing apparatus embodiment.
  • the terminal device may be implemented in various forms; the terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, a vehicle-mounted terminal device, a vehicle-mounted display terminal, and a vehicle-mounted electronic rear-view mirror, as well as fixed terminal devices such as a digital TV and a desktop computer.
  • the terminal may further include other components.
  • the image processing terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A/V (audio/video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, a controller 77, an output unit 78, a storage unit 79, and so on.
  • FIG. 7 shows a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network.
  • the A / V input unit 73 is used to receive audio or video signals.
  • the user input unit 74 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • the sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration movement and direction of the terminal 70, and the like, and generates a command or signal for controlling the operation of the terminal 70.
  • the interface unit 76 functions as an interface through which at least one external device can connect with the terminal 70.
  • the output unit 78 is configured to provide an output signal in a visual, audio, and / or tactile manner.
  • the storage unit 79 may store software programs and the like for processing and control operations performed by the controller 77, or may temporarily store data that has been output or is to be output.
  • the storage unit 79 may include at least one type of storage medium.
  • the terminal 70 can cooperate with a network storage device that performs a storage function of the storage unit 79 through a network connection.
  • the controller 77 generally controls the overall operation of the terminal device.
  • the controller 77 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 77 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 71 receives external power or internal power under the control of the controller 77 and provides appropriate power required to operate each element and component.
  • various embodiments of the image processing method proposed by the present disclosure may be implemented using, for example, computer software, hardware, or any combination thereof in a computer-readable medium.
  • various embodiments of the image processing method proposed by the present disclosure may be implemented by using an application specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, various embodiments of the image processing method proposed by the present disclosure may be implemented in the controller 77.
  • various embodiments of the image processing method proposed by the present disclosure may be implemented with a separate software module that allows execution of at least one function or operation.
  • the software codes may be implemented by a software application program (or program) written in any suitable programming language, and the software codes may be stored in the storage unit 79 and executed by the controller 77.
  • as used herein, an "or" in an enumeration of items prefaced by "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (that is, A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.
  • These decompositions and / or recombinations should be considered as equivalent solutions of the present disclosure.
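To make the separable-filter optimization noted above concrete: a minimal sketch, assuming a single-channel float image and the illustrative 1 × 3 kernel [1, 2, 1] / 4 (the patent's actual Gaussian matrix values are not reproduced here, and the function names are ours, not the patent's), of the full 3 × 3 smoothing convolution versus the equivalent two-pass form.

```python
import numpy as np

def smooth_2d(image: np.ndarray, k1d: np.ndarray) -> np.ndarray:
    """Blur with the full 3x3 smoothing matrix (outer product of k1d with itself).

    For every pixel this costs 9 multiplications and 8 additions, as noted above.
    """
    kernel = np.outer(k1d, k1d)
    padded = np.pad(image, 1, mode="edge")
    height, width = image.shape
    out = np.zeros((height, width), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + height, dx:dx + width]
    return out

def smooth_separable(image: np.ndarray, k1d: np.ndarray) -> np.ndarray:
    """Equivalent blur as two 1x3 passes: 6 multiplications and 4 additions per pixel."""
    padded = np.pad(image, 1, mode="edge")
    height, width = image.shape
    rows = np.zeros((height + 2, width), dtype=float)
    for dx in range(3):                      # horizontal (X) pass
        rows += k1d[dx] * padded[:, dx:dx + width]
    out = np.zeros((height, width), dtype=float)
    for dy in range(3):                      # vertical (Y) pass
        out += k1d[dy] * rows[dy:dy + height, :]
    return out

# Illustrative normalised kernel; its outer product plays the role of the smoothing matrix.
k1d = np.array([1.0, 2.0, 1.0]) / 4.0
```

Because the 3 × 3 smoothing matrix is the outer product of the 1 × 3 kernel with itself, both functions produce the same result; the separable form simply reorders the arithmetic so that fewer operations are needed per pixel.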

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

Disclosed in embodiments of the present disclosure are an image processing method and device, an electronic device and a computer readable storage medium. Wherein, the image processing method comprises: receiving an original image; performing first processing on the original image to obtain a first image; performing second processing on the original image to obtain a second image, wherein the second processing is: (the original image-the first image*α)/β, wherein 0<α<1, 0<β<1. The technical solution in the embodiments can adjust an image processing result according to a coefficient, so that a user can obtain different processing effects according to different coefficients, improving image processing flexibility.

Description

Image processing method, device, electronic device and computer-readable storage medium
Cross Reference
The present disclosure references the Chinese patent application No. 201810609995.X, entitled "Image Processing Method, Apparatus, Electronic Device and Computer-readable Storage Medium", filed on June 13, 2018, which is incorporated into this application by reference in its entirety.
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, apparatus, electronic device, and computer-readable storage medium.
Background Art
With the rapid development of computer technology, digital image processing is applied more and more widely; fields ranging from fingerprints, barcodes and medical care to artificial intelligence, security and the military industry are inseparable from image processing. What we see most in daily life is the vehicle monitoring system: wherever there is a camera, there is digital image processing. Sometimes the data collected by the camera cannot be used directly, and a series of processing needs to be performed on the collected images so that users can view the image information more conveniently and clearly.
Summary of the Invention
In actual image processing, the inventor found many inflexible aspects. For example, in the prior art, the Laplace algorithm is often used to sharpen an image, but it merely applies the Laplace algorithm to present a sharpening effect on the image directly, and the sharpening effect cannot be adjusted conveniently.
In view of the problem in the prior art that the processing effect cannot be adjusted flexibly, embodiments of the present disclosure provide an image processing method to at least partially solve the above problem. In addition, an image processing apparatus, an electronic device, and a computer-readable storage medium are also provided.
To achieve the above object, according to a first aspect of the present disclosure, the following technical solution is provided:
An image processing method includes: receiving an original image; performing first processing on the original image to obtain a first image; and performing second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
Optionally, the value of β is associated with the value of α.
Optionally, β = 1 - α.
Optionally, β = 1 - α + c, where c is a constant, and 0 < c < 1.
Optionally, the first processing is: performing blur processing on the original image.
Optionally, the first processing is: dividing the original image into a plurality of image regions; acquiring/deleting one or more image regions in the original image to obtain an intermediate image; and performing blur processing on the intermediate image.
Optionally, the blur processing is: calculating an average value according to the value of the current pixel point of the image and the values of its neighboring pixel points, and using the average value as the value of the current pixel point.
Optionally, the average value is calculated by: computing a smoothing matrix, and performing a convolution calculation on the value of the current pixel point of the image and the values of its neighboring pixel points with the smoothing matrix to obtain the average value.
Optionally, acquiring/deleting one or more image regions in the original image to obtain the intermediate image includes: receiving a selection instruction, where the selection instruction is used to select one or more image regions in the image; using the selected one or more image regions as the intermediate image; or deleting the selected one or more image regions and using the remaining image as the intermediate image.
To achieve the above object, according to a second aspect of the present disclosure, the following technical solution is also provided:
An image processing apparatus includes: a receiving module, configured to receive an original image;
a first processing module, configured to perform first processing on the original image to obtain a first image; and
a second processing module, configured to perform second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
Optionally, the value of β is associated with the value of α.
Optionally, β = 1 - α.
Optionally, β = 1 - α + c, where c is a constant, and 0 < c < 1.
Optionally, the first processing module includes: a first blur processing module, configured to perform blur processing on the original image.
Optionally, the first processing module includes: a segmentation module, configured to divide the original image into a plurality of image regions; an intermediate processing module, configured to acquire/delete one or more image regions in the original image to obtain an intermediate image; and a second blur processing module, configured to perform blur processing on the intermediate image.
Optionally, the blur processing is: calculating an average value according to the value of the current pixel point of the image and the values of its neighboring pixel points, and using the average value as the value of the current pixel point.
Optionally, the average value is calculated by: computing a smoothing matrix, and performing a convolution calculation on the value of the current pixel point of the image and the values of its neighboring pixel points with the smoothing matrix to obtain the average value.
Optionally, the intermediate processing module includes: an intermediate image selection module, configured to select one or more image regions in the image; use the selected one or more image regions as the intermediate image; or delete the selected one or more image regions and use the remaining image as the intermediate image.
To achieve the above object, according to a third aspect of the present disclosure, the following technical solution is also provided:
An electronic device includes:
at least one processor; and
a memory communicatively connected with the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the image processing method according to the first aspect.
To achieve the above object, according to a fourth aspect of the present disclosure, the following technical solution is also provided:
A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute the image processing method according to the first aspect.
Embodiments of the present disclosure provide an image processing method, apparatus, electronic device, and computer-readable storage medium. The image processing method includes: receiving an original image; performing first processing on the original image to obtain a first image; and performing second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1. By adopting this technical solution, the embodiments of the present disclosure can adjust the result of image processing according to the coefficient β, so that users can obtain different processing effects with different coefficients, which improves the flexibility of image processing.
The above description is only an overview of the technical solutions of the present disclosure. In order to understand the technical means of the present disclosure more clearly so that they can be implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1a is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
FIG. 1b is a schematic flowchart of an image processing method according to another embodiment of the present disclosure.
FIG. 2 is a schematic diagram of a human-machine interface for selecting a first processing according to an embodiment of the present disclosure.
FIG. 3 is a schematic flowchart of an image processing method according to another embodiment of the present disclosure.
FIG. 4a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
FIG. 4b is a schematic structural diagram of an embodiment of a first processing module in the image processing apparatus of FIG. 4a.
FIG. 4c is a schematic structural diagram of another embodiment of a first processing module in the image processing apparatus of FIG. 4a.
FIG. 5 is a schematic structural diagram of an image processing hardware device according to an embodiment of the present disclosure.
FIG. 6 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
FIG. 7 is a schematic structural diagram of an image processing terminal according to an embodiment of the present disclosure.
Detailed Description of the Embodiments
The implementations of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. The present disclosure can also be implemented or applied through other different specific implementations, and various details in this specification can also be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the case of no conflict, the following embodiments and the features in the embodiments can be combined with each other. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, any number of the aspects set forth herein may be used to implement a device and/or practice a method. In addition, such a device may be implemented and/or such a method may be practiced using structures and/or functionality other than one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only illustrate the basic idea of the present disclosure in a schematic manner. The drawings only show the components related to the present disclosure rather than the number, shape, and size of the components in actual implementation; the type, quantity, and proportion of each component can be changed at will in actual implementation, and the component layout may also be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the described aspects may be practiced without these specific details.
To solve the technical problem of how to enhance an image, an embodiment of the present disclosure provides an image processing method. As shown in FIG. 1a, the image processing method mainly includes the following steps S1 to S3:
Step S1: receive an original image.
The original image may be an unprocessed video or picture collected from an image sensor, and the image sensor may be a camera, an ultrasonic sensor, or the like; the original image may also be a video or picture obtained from other sources, such as downloaded from a network server or read from a removable storage. In short, the original image is not limited to an image that has not undergone any processing, but refers to an image that has not been processed by the image processing method described in the embodiments of the present application.
Step S2: perform first processing on the original image to obtain a first image.
In one embodiment, the first processing may be any type of image processing. For example, the first processing may be blur processing, that is, extracting the low-frequency components of the image; it may be segmentation processing, that is, dividing the image into a number of different regions; or it may be compression processing, which compresses the image to reduce its size.
In one embodiment, the processing type of the first processing is configurable. For example, a human-machine interaction interface for the first processing may be provided for the user to select one of a plurality of first processings as the currently configured first processing; a programming interface may also be provided so that the user can write the processing steps of the first processing, to provide maximum flexibility.
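One possible way to realize this configurability, sketched under the assumption that each first processing is a simple image-to-image callable operating on a single-channel NumPy float array; the registry, the option names, and the crude "downscale" stand-in for compression are illustrative and not taken from the patent.

```python
import numpy as np

def box_blur(image: np.ndarray) -> np.ndarray:
    """Simple 3x3 mean blur: one possible 'first processing' (low-frequency extraction)."""
    padded = np.pad(image, 1, mode="edge")     # assumes a 2-D, single-channel image
    height, width = image.shape
    return sum(padded[dy:dy + height, dx:dx + width]
               for dy in range(3) for dx in range(3)) / 9.0

# Registry of selectable first processings; the keys are what a human-machine
# interaction interface would present, the values are image -> image callables.
FIRST_PROCESSES = {
    "blur": box_blur,
    "downscale": lambda img: img[::2, ::2],    # crude stand-in for compression
}

def register_first_process(name, func):
    """Programming-interface hook: lets a user plug in a custom first processing."""
    FIRST_PROCESSES[name] = func

def run_first_process(name, image):
    """Apply the currently selected first processing to the original image."""
    return FIRST_PROCESSES[name](image)
```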
Step S3: perform second processing on the original image to obtain a second image, where the second processing is: (original image - first image * α) / β, where 0 < α < 1 and 0 < β < 1.
As shown in the above formula, the second processing includes several sub-steps (a code sketch of these sub-steps is given after the list below):
S301: multiply the first image obtained after the first processing in step S2 by a coefficient α, where α is greater than 0 and less than 1;
S302: subtract the product of the first image and α from the original image; generally, an image is a vector matrix, and subtracting two images is essentially subtracting two vector matrices;
S303: divide the result of the subtraction in step S302 by a coefficient β, where β is an enhancement coefficient whose value is greater than 0 and less than 1; its function is to amplify the result of the subtraction in step S302 by a factor of 1/β, so as to strengthen the result of the image processing.
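As a concrete illustration of sub-steps S301 to S303, the following is a minimal sketch assuming the original image and the first image are NumPy float arrays of the same shape with values in [0, 1]; the function name, the default value of c, and the final clipping back into [0, 1] are assumptions of the sketch, not requirements stated in the patent.

```python
import numpy as np

def second_process(original: np.ndarray, first: np.ndarray,
                   alpha: float, c: float = 0.1) -> np.ndarray:
    """Compute (original - first * alpha) / beta with beta = 1 - alpha + c."""
    if not (0.0 < alpha < 1.0):
        raise ValueError("alpha must lie strictly between 0 and 1")
    beta = 1.0 - alpha + c            # kept away from zero by the constant c
    weighted = first * alpha          # S301: scale the first (e.g. blurred) image
    difference = original - weighted  # S302: element-wise subtraction of the two matrices
    second = difference / beta        # S303: amplify the residue by a factor of 1/beta
    return np.clip(second, 0.0, 1.0)  # keep the result displayable (assumption, not in the patent)
```

Increasing α removes more of the first image (for example, more of the low-frequency content when the first processing is a blur), and the correspondingly smaller β amplifies what remains, which is what produces the adjustable enhancement effect described below.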
In one embodiment, the value of the coefficient β is associated with the value of the coefficient α; they satisfy a certain functional relationship. The relationship can be set as needed, and a coefficient human-computer interaction interface can be provided for users to adjust the relationship between α and β.
In one embodiment, the above functional relationship is β = 1 - α. As α increases, β decreases and 1/β becomes larger. The meaning is: the more of the first image that is subtracted, the more the processing result needs to be strengthened in order to highlight it; the less of the first image that is subtracted, the less the processing result needs to be strengthened, because in this case few modifications have been made to the original image.
In one embodiment, since 0 < α < 1, when α is infinitely close to 1, 1/β becomes very large, which easily leads to errors. Therefore, preferably, β = 1 - α + c, where c is a constant; the existence of the constant c ensures that 1/β does not become infinite.
In one embodiment, the value of α can be dynamically adjusted. For example, the user can adjust the value of α through a coefficient-configuration human-computer interaction interface: the interface receives a coefficient configuration instruction sent by the user and configures the value of α according to the instruction. Specifically, the human-computer interaction interface may be a sliding control, such as a slider, whose original position is the origin; the distance by which the user drags the slider away from the origin is positively related to the value of α, so the farther the slider is from the origin, the larger the value of α. Alternatively, the sliding control may be a knob whose initial angle is 0°; when the user rotates the knob, the greater the angle of rotation of the knob, the greater the value of α.
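To make the coefficient-configuration interface concrete, here is a hedged sketch of mapping a slider position or a knob angle to α; the control ranges and the small margin that keeps α strictly inside (0, 1) are assumptions of the sketch.

```python
def alpha_from_slider(position: float, track_length: float) -> float:
    """Map the slider's distance from its origin to a value of alpha in (0, 1)."""
    margin = 1e-3                                       # keeps alpha strictly inside (0, 1)
    ratio = max(0.0, min(position / track_length, 1.0)) # clamp the drag distance to the track
    return margin + ratio * (1.0 - 2.0 * margin)

def alpha_from_knob(angle_degrees: float) -> float:
    """Map a knob rotation (assumed 0-360 degrees) to alpha; a larger angle gives a larger alpha."""
    return alpha_from_slider(angle_degrees % 360.0, 360.0)
```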
The above ways of configuring the coefficients are not exhaustive; on the basis of the ways listed above, those skilled in the art can also make simple transformations (for example, permutations or combinations) or equivalent replacements, which should also be included within the protection scope of the present disclosure.
By adopting the above technical solution, this embodiment can perform corresponding processing on the image to be processed according to the coefficients, so that a corresponding enhancement effect can be obtained according to different coefficients, thereby improving the user experience.
In one embodiment, the first processing is a processing set composed of multiple image sub-processes. As shown in FIG. 1b, the first processing in step S2 includes the following sub-steps (a sketch of these sub-steps chained together is given after the list below):
S201: divide the original image into a plurality of image regions;
S202: acquire/delete one or more image regions in the original image to obtain an intermediate image;
S203: perform blur processing on the intermediate image.
在该实施例中,第一处理是分割处理、获取/删除处理、模糊处理组成的处理集合。In this embodiment, the first process is a processing set consisting of a division process, an acquisition / deletion process, and a blur process.
在执行第一处理时,首先将图像进行分割,所述分割可以是按照预先设定的规则进行分割或者由用户手动划定分割区域或范围;分割之后,获取需要保留的分割区域或者删除需要去除的分割区域,所述获取或删除可以按照预先设定的规则执行,或者由用户手动选择需要获取或删除的分割区域;最后,对获取的分割区域或者删除之后剩余的分割区域进行模糊处理。When the first processing is performed, the image is first segmented; the segmentation may follow a preset rule, or the user may manually delimit the segmentation areas or ranges. After segmentation, the segmented areas that need to be retained are acquired, or the segmented areas that need to be removed are deleted; the acquisition or deletion may be performed according to a preset rule, or the user may manually select the segmented areas to be acquired or deleted. Finally, the acquired segmented areas, or the segmented areas remaining after deletion, are blurred.
执行所述分割处理时,根据预定的分割规则对图像进行分割,举例来说,可以通过图像上的关键点将图像分割成多个图像区域;执行获取处理时,接收选择指令,所述选择指令用于选择一个或多个图像区域,将所选择的一个或多个图像区域作为中间图像;执行删除处理时,接收选择指令,所述选择指令用于选择一个或多个图像区域,将所选择的一个或多个图像区域删除,剩余的图像作为中间图像;最后对中间图像进行模糊处理,得到第一图像。When the segmentation processing is performed, the image is segmented according to a predetermined segmentation rule; for example, the image may be segmented into multiple image regions according to key points on the image. When the acquisition processing is performed, a selection instruction is received, the selection instruction is used to select one or more image regions, and the selected one or more image regions are used as the intermediate image. When the deletion processing is performed, a selection instruction is received, the selection instruction is used to select one or more image regions, the selected one or more image regions are deleted, and the remaining image is used as the intermediate image. Finally, the intermediate image is blurred to obtain the first image.
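The following sketch strings the three sub-processes together; the label-map representation of the segmentation result, the zero-filling of removed regions, and the function names are assumptions made for illustration only.

```python
import numpy as np

def first_processing(original, region_labels, chosen, delete=True, blur_fn=None):
    # region_labels: integer label map with the same height/width as the image
    # chosen: set of region labels picked by the selection instruction
    mask = np.isin(region_labels, list(chosen))
    intermediate = original.copy()
    if delete:
        intermediate[mask] = 0        # deletion: drop the chosen regions
    else:
        intermediate[~mask] = 0       # acquisition: keep only the chosen regions
    return blur_fn(intermediate) if blur_fn else intermediate
```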
在该实施例中,所述处理集合可以是固定的也可以是可配置的,举例来说,如果需要使用同一配置批量处理很多图像,可以使用固定的处理集合来防止处理错误;也可以提供多个处理集合,并给用户提供每个处理集合的处理效果,供用户选择,以提供灵活性。In this embodiment, the processing set may be fixed or configurable. For example, if many images need to be processed in batches with the same configuration, a fixed processing set can be used to prevent processing errors; alternatively, multiple processing sets can be provided, and the processing effect of each set can be presented to the user for selection, so as to provide flexibility.
所述处理集合中的处理流程也可以是固定的或者是可配置的,举例来说,如图2所示,可以预先设置一些典型的处理集合,提供第一处理的人机交互界面,供用户从多个处理集合中选择一个处理集合作为当前配置的第一处理,如图2所示,第一处理包括了第一集合、第二集合、第三集合和第四集合,供用户选择,用户可以选择任意一个集合作为第一处理,也可以选择几个集合联合起来作为第一处理;另外,如图2所示,也可以在第一处理的人机交互界面中提供多个第一处理的子处理,用户可以对这些子处理自由组合并指定子处理之间的处理顺序,以形成自定义的第一处理,如图2所示的第一集合中的分割处理、选择处理和模糊处理,用户可以增加和删除其中的一个子处理,并且可以调整子处理之间的顺序,如图2所示,用户可以将选择处理和分割处理交换顺序,此时,需要用户先选择需要做分割处理的图像,再对选择的图像进行分割处理,并且用户可以通过预览来预先了解自定义的处理流程的处理效果。可以理解的是,通过人机交互界面来配置第一处理只是一种实施方式,本领域技术人员可以通过任意合适的方式对第一处理进行配置;对第一处理的配置方式也不限定为上述方式,处理集合之间也可以组合形成第一处理,本申请不做更多限定,在此仅表明第一处理可以根据需要预先设定或者动态配置。The processing flow within a processing set may also be fixed or configurable. For example, as shown in FIG. 2, some typical processing sets may be preset, and a human-computer interaction interface of the first processing may be provided so that the user can select one processing set from multiple processing sets as the currently configured first processing. As shown in FIG. 2, the first processing includes a first set, a second set, a third set and a fourth set for the user to choose from; the user may select any one set as the first processing, or combine several sets as the first processing. In addition, as shown in FIG. 2, multiple sub-processes of the first processing may also be provided in the human-computer interaction interface of the first processing; the user can freely combine these sub-processes and specify the processing order between them to form a customized first processing. Taking the segmentation processing, selection processing and blur processing in the first set shown in FIG. 2 as an example, the user can add or delete one of the sub-processes and adjust the order between them; as shown in FIG. 2, the user can swap the order of the selection processing and the segmentation processing, in which case the user first selects the image to be segmented and then performs segmentation on the selected image, and the user can learn the processing effect of the customized processing flow in advance through a preview. It can be understood that configuring the first processing through a human-computer interaction interface is only one implementation; those skilled in the art can configure the first processing in any suitable manner, and the configuration of the first processing is not limited to the above ways. Processing sets may also be combined to form the first processing. The present application places no further limitation here; it is only indicated that the first processing can be preset or dynamically configured as needed.
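One way to realize such a configurable processing set is to treat it as an ordered list of sub-process functions that the interface can reorder or extend; this is only a sketch, and the step names in the comments are placeholders rather than functions defined in this disclosure.

```python
def run_processing_set(image, sub_processes):
    # apply each configured sub-process in the order chosen by the user
    for step in sub_processes:
        image = step(image)
    return image

# e.g. first_set = [segment_step, select_step, blur_step]   (default order)
#      custom    = [select_step, segment_step, blur_step]   (order swapped by the user)
```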
本领域技术人员应清楚,上述第一处理中的子处理方式并非穷举,本领域技术人员在上述所列方式的基础上还可以进行简单变换或等同替换,这些简单变换或等同替换也应包含在本公开的保护范围之内。Those skilled in the art should understand that the sub-processing methods in the first processing described above are not exhaustive. On the basis of the ways listed above, those skilled in the art can also make simple transformations or equivalent replacements, and these simple transformations or equivalent replacements should also fall within the protection scope of the present disclosure.
由此可见,本公开通过采取上述技术方案,根据用户发出的选择指令,对第一处理进行配置,由此使得用户可以根据自己的需要对图形处理的效果进行调整,从而提高了用户体验效果。It can be seen that, by adopting the above technical solution, the present disclosure configures the first process according to the selection instruction issued by the user, thereby enabling the user to adjust the effect of the graphic processing according to his own needs, thereby improving the user experience effect.
在一个可选的实施例中,如图3所示,基于图1b所示实施例,以人脸 图像为例,描述一个完整的图像处理实施例:In an optional embodiment, as shown in FIG. 3, based on the embodiment shown in FIG. 1b, a face image is taken as an example to describe a complete image processing embodiment:
步骤S301,获取人脸图像;Step S301, obtaining a face image;
所述人脸图像可以是用于使用移动终端的图像传感器,如摄像头等获取到的自拍图像;The face image may be a self-portrait image obtained by using an image sensor of a mobile terminal, such as a camera;
步骤302,定位人脸的关键点,并将人脸划分为面部区域和五官区域;Step 302: locate key points of the face, and divide the face into a facial area and a facial feature area;
当接收到所述人脸图像之后,定位人脸图像上的关键点;所述关键点为脸部轮廓的关键点和五官的关键点,由此,可以将人脸分割为五官区域和面部区域;可以理解的是,此处区域的分割是可以动态配置的,用户可以根据需要预先配置需要划分的区域,也可以手动划分区域,在手动划分区域的情况下,无需定位关键点;After the face image is received, key points on the face image are located; the key points are the key points of the facial contour and the key points of the facial features, so that the human face can be divided into a facial-feature area and a facial area. It can be understood that the division of areas here can be dynamically configured: the user can pre-configure the areas to be divided as needed, or divide the areas manually; when the areas are divided manually, there is no need to locate key points.
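A possible way to turn the located key points into a label map of facial regions is sketched below; it assumes an external landmark detector has already produced contour points for each region, and uses OpenCV's fillPoly only as one convenient choice, not as the method required by this disclosure.

```python
import numpy as np
import cv2

def regions_from_keypoints(image_shape, region_contours):
    # region_contours: dict mapping region names ('nose', 'left_eye', ...) to
    # lists of (x, y) key points outlining that region (detector not shown here)
    labels = np.zeros(image_shape[:2], dtype=np.uint8)
    for idx, (_, points) in enumerate(sorted(region_contours.items()), start=1):
        cv2.fillPoly(labels, [np.asarray(points, dtype=np.int32)], idx)
    return labels
```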
步骤303,接收选择指令,所述选择指令选择五官区域和/或面部区域;Step 303: Receive a selection instruction that selects the facial features area and / or the facial area;
举例来说,选择指令选择了眼睛和鼻子;可以理解的是,此处的选择指令可以是选择任意划分好的区域。For example, the selection instruction selects the eyes and the nose; it can be understood that the selection instruction here may be to select an arbitrarily divided area.
步骤304,删除所述选择指令所选择的区域,剩下的图像作为中间图像。In step 304, the area selected by the selection instruction is deleted, and the remaining image is used as an intermediate image.
此处以删除处理为例,在选择了眼睛和鼻子之后,将眼睛和鼻子的图像删除,剩下的图像为中间图像。Here, the deletion process is taken as an example. After the eyes and the nose are selected, the images of the eyes and the nose are deleted, and the remaining images are intermediate images.
步骤305,对中间图像进行模糊处理,得到第一图像。Step 305: Blur the intermediate image to obtain the first image.
在该实施例中,所述模糊处理为:根据图像中当前像素点的值与其周围相邻像素点的值计算平均值,将所述平均值作为当前像素点的值,对图像中所有的像素点遍历上述操作,得到的结果就是模糊处理之后的图像。In this embodiment, the blurring processing is: calculating an average value according to the value of the current pixel in the image and the values of its neighboring pixels, and using the average value as the value of the current pixel; this operation is traversed over all pixels in the image, and the result obtained is the blurred image.
在该实施例中,上述模糊处理中计算平均值的过程为:计算平滑矩阵,将图像当前的像素点的值和其周围相邻像素点的值与平滑矩阵做卷积计算,得到平均值。In this embodiment, the process of calculating the average value in the above-mentioned blurring process is: calculating a smoothing matrix, performing convolution calculation on the value of the current pixel point of the image and the values of neighboring pixel points around the image with the smoothing matrix to obtain the average value.
以下举例说明上述模糊处理的过程:The following examples illustrate the process of the above blurring process:
利用高斯分布公式计算平滑矩阵:Calculate the smoothing matrix using the Gaussian distribution formula:
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
其中σ为正态分布的标准偏差,x和y分别为图像中像素点的x轴坐标和y轴坐标,在此取σ=1,则上述公式变换为:Where σ is the standard deviation of the normal distribution, and x and y are respectively the x-axis and y-axis coordinates of a pixel in the image; taking σ = 1, the above formula becomes:
G(x, y) = (1 / (2π)) · e^(−(x² + y²) / 2)
设当前像素点的坐标为(0,0),则该像素点与其周围像素点的坐标可以用下边的矩阵表示:Let the coordinates of the current pixel be (0,0), then the coordinates of the pixel and its surrounding pixels can be represented by the following matrix:
(-1, 1)   (0, 1)   (1, 1)
(-1, 0)   (0, 0)   (1, 0)
(-1, -1)  (0, -1)  (1, -1)
由此,每个点的x 2+y 2的值可以由以下矩阵来表示: Therefore, the value of x 2 + y 2 at each point can be represented by the following matrix:
2  1  2
1  0  1
2  1  2
则,根据高斯分布公式得到高斯分布矩阵为:Then, the Gaussian distribution matrix obtained according to the Gaussian distribution formula is:
0.0585  0.0965  0.0585
0.0965  0.1592  0.0965
0.0585  0.0965  0.0585
对该矩阵进行归一化,得到平滑矩阵:Normalize this matrix to get a smooth matrix:
0.075  0.124  0.075
0.124  0.204  0.124
0.075  0.124  0.075
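The smoothing matrix above can be reproduced with a few lines of Python; the radius parameter and the rounding in the printout are assumptions used only to check the numbers.

```python
import numpy as np

def gaussian_smoothing_matrix(radius=1, sigma=1.0):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()                      # normalize so the weights sum to 1

print(np.round(gaussian_smoothing_matrix(), 3))
# [[0.075 0.124 0.075]
#  [0.124 0.204 0.124]
#  [0.075 0.124 0.075]]
```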
利用图像的像素点的值组成的矩阵与该平滑矩阵进行卷积计算,得到该像素点的平均值,平滑矩阵中的值称为平滑系数,假设上述中间图像中的一个像素点及其相邻像素点的值如下矩阵所示:A convolution calculation is performed between the matrix composed of the pixel values of the image and this smoothing matrix to obtain the average value of the pixel; the values in the smoothing matrix are called smoothing coefficients. Assume that the values of one pixel in the above intermediate image and its neighboring pixels are as shown in the following matrix:
100  102  110
105  103  112
104  106  100
则值为103的像素点,通过模糊处理之后的值为:Then, for the pixel whose value is 103, the value after the blurring processing is:
100*0.075+102*0.124+110*0.075+105*0.124+103*0.204+112*0.124+104*0.075+106*0.124+100*0.075=105;100 * 0.075 + 102 * 0.124 + 110 * 0.075 + 105 * 0.124 + 103 * 0.204 + 112 * 0.124 + 104 * 0.075 + 106 * 0.124 + 100 * 0.075 = 105;
对中间图像中的每一个像素点均做上述模糊处理,得到模糊之后的中间图像即为将原始图像进行第一处理之后得到的第一图像。The above blur processing is performed on each pixel point in the intermediate image, and the obtained intermediate image after the blur is the first image obtained after the original image is subjected to the first processing.
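Putting the pieces together, a direct (unoptimized) implementation of the blurring step might look as follows; edge padding and rounding to integers are choices assumed here rather than specified in the text.

```python
import numpy as np

def blur(image, kernel):
    # replace each pixel by the kernel-weighted average of its neighborhood
    r = kernel.shape[0] // 2
    padded = np.pad(image.astype(np.float32), r, mode='edge')
    out = np.empty(image.shape, dtype=np.float32)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            window = padded[y:y + 2 * r + 1, x:x + 2 * r + 1]
            out[y, x] = (window * kernel).sum()
    return np.round(out).astype(np.uint8)

patch = np.array([[100, 102, 110],
                  [105, 103, 112],
                  [104, 106, 100]], dtype=np.uint8)
# blur(patch, gaussian_smoothing_matrix())[1, 1] comes out at about 105,
# matching the hand calculation above (kernel from the earlier sketch)
```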
上述计算过程可以进一步优化,上述平滑矩阵为一个3*3的二维矩阵,每个像素点需要计算9次乘法和8次加法,计算量很大。可以通过将上述二维矩阵变换成两个一维的1*3的矩阵,每个像素只需要分别在X方向和Y方向上进行3次乘法和2次加法,累计进行6次乘法和4次加法即可,更进一步的,通过观察可以发现,在卷积的过程中,所有的乘运算都是发生在平滑系数和像素值之间,而且平滑系数是固定的:对于1*3的平滑矩阵,只有3个不同的平滑系数,而像素的值范围也是固定的:0~255,总共256个值。因此,所有平滑系数和像素值的乘积只有3*256=768种不同的结果,所以将这768个结果保存在一张表中,用的时候直接查表即可,最终上述9次乘法和8次加法进一步缩减为4次加法,可以大大减少计算量。The above calculation process can be further optimized. The smoothing matrix above is a 3*3 two-dimensional matrix, so each pixel requires 9 multiplications and 8 additions, which is a large amount of calculation. By decomposing the two-dimensional matrix into two one-dimensional 1*3 matrices, each pixel only needs 3 multiplications and 2 additions in the X direction and in the Y direction respectively, for a total of 6 multiplications and 4 additions. Going one step further, it can be observed that during the convolution all multiplications take place between a smoothing coefficient and a pixel value, and the smoothing coefficients are fixed: a 1*3 smoothing matrix has only 3 smoothing coefficients, and the range of pixel values is also fixed at 0~255, i.e. 256 values in total. Therefore, the products of all smoothing coefficients and pixel values have only 3*256 = 768 different results, so these 768 results can be stored in a table and looked up directly when needed; in the end, the above 9 multiplications and 8 additions are further reduced to 4 additions, which greatly reduces the amount of calculation.
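The optimization described in this paragraph can be sketched as two 1×3 passes plus a 3×256 product table; rounding the intermediate result back to 0–255 between the two passes is a simplification assumed here so that the same table can be reused, and it introduces a small approximation relative to the exact blur.

```python
import numpy as np

def blur_separable_with_lut(image, sigma=1.0):
    ax = np.arange(-1, 2)
    k1d = np.exp(-ax**2 / (2 * sigma**2))
    k1d /= k1d.sum()                        # the three 1-D smoothing coefficients
    lut = np.outer(k1d, np.arange(256))     # 3 * 256 = 768 precomputed products

    def one_pass(img):                      # img holds integers in 0..255
        padded = np.pad(img, ((0, 0), (1, 1)), mode='edge')
        out = (lut[0, padded[:, :-2]] +     # only table lookups and additions
               lut[1, padded[:, 1:-1]] +
               lut[2, padded[:, 2:]])
        return np.clip(np.round(out), 0, 255).astype(np.uint8)

    horizontal = one_pass(image)            # 1x3 pass along the rows
    return one_pass(horizontal.T).T         # 1x3 pass along the columns
```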
步骤306,对原始人脸图像进行第二处理,得到第二图像。Step 306: Perform a second process on the original face image to obtain a second image.
在该实施例中,系数α=0.6,常数c=0.1,则β=1-α+c=0.5,则经过第二处理后的图像为:(原始图像-第一图像*0.6)*2,In this embodiment, if the coefficient α = 0.6 and the constant c = 0.1, then β = 1-α + c = 0.5, then the image after the second processing is: (original image-first image * 0.6) * 2,
其中,第一图像实际上是中间图像的低频分量,(原始图像-第一图像*0.6)则是保留鼻子和眼睛部分的原始图像,其余部分的人脸图像减去低频分量的0.6倍,保留高频分量和少部分低频分量,将最终的结果乘以放大系数2,得到最终的结果,其结果是将人脸的部分图像进行锐化,使鼻子和眼睛以及脸部磨皮之后的部分更加清晰。Here, the first image is actually the low-frequency component of the intermediate image, and (original image - first image * 0.6) is the original image with the nose and eye parts retained, while from the rest of the face image 0.6 times the low-frequency component is subtracted, retaining the high-frequency component and a small part of the low-frequency component. Multiplying this by the magnification factor 2 gives the final result, which is that part of the face image is sharpened, making the nose, the eyes, and the smoothed portion of the face clearer.
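For this concrete setting the second processing reduces to a short function; the clipping back to 8-bit values is again an added assumption, and `original` and `first_image` stand for the face image from step S301 and the blurred result from step S305.

```python
import numpy as np

def sharpen(original, first_image, alpha=0.6, c=0.1):
    beta = 1 - alpha + c                    # 0.5 here, so 1/beta = 2
    out = (original.astype(np.float32) - alpha * first_image.astype(np.float32)) / beta
    return np.clip(out, 0, 255).astype(np.uint8)
```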
在该实施例中,用户使用具备摄像头以及触控屏的移动终端运行该图像处理方法,通过摄像头获取到用户的人脸图像,用户点选鼻子和眼睛,并将其删除,之后拖动表示锐化程度的滑动控件,则屏幕自动显示经过锐化处理之后的人脸图像,随后用户可以继续拖动滑动控件,并实时查看锐化的效果。In this embodiment, the user runs the image processing method on a mobile terminal equipped with a camera and a touch screen. The user's face image is captured through the camera; the user taps the nose and the eyes and deletes them, and then drags the sliding control that indicates the degree of sharpening, whereupon the screen automatically displays the sharpened face image; the user can then continue to drag the sliding control and view the sharpening effect in real time.
本实施例通过选择指令选择需要处理的图像区域,通过滑动控件控制处理结果的强化程度,并且可以使用户实时预览处理结果,从而提高了图像处理的灵活性。In this embodiment, the image area to be processed is selected by a selection instruction, the degree of enhancement of the processing result is controlled by a sliding control, and the user can preview the processing result in real time, thereby improving the flexibility of image processing.
本领域技术人员应能理解,在上述各个实施例的基础上,还可以进行明显变型(例如,对所列举的模式进行组合)或等同替换,例如,本领域技术人员可以在上述实施例的基础上,再结合多次的图像处理结果,使用上述方案对多个图像结合处理。Those skilled in the art should understand that, on the basis of the above embodiments, obvious modifications (for example, combining the listed modes) or equivalent replacements can also be made; for example, on the basis of the above embodiments, those skilled in the art can further combine the results of multiple rounds of image processing and use the above scheme to process multiple images jointly.
在上文中,虽然按照上述的顺序描述了图像处理方法实施例中的各个步骤,本领域技术人员应清楚,本公开实施例中的步骤并不必然按照上述顺序执行,其也可以倒序、并行、交叉等其他顺序执行,而且,在上述步骤的基础上,本领域技术人员也可以再加入其他步骤,这些明显变型或等同替换的方式也应包含在本公开的保护范围之内,在此不再赘述。In the above, although the steps in the image processing method embodiments are described in the above order, those skilled in the art should understand that the steps in the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in reverse order, in parallel, interleaved, or in other orders. Moreover, on the basis of the above steps, those skilled in the art may also add other steps. These obvious modifications or equivalent replacements should also be included within the protection scope of the present disclosure, and will not be repeated here.
下面为本公开装置实施例,本公开装置实施例可用于执行本公开方法实施例实现的步骤,为了便于说明,仅示出了与本公开实施例相关的部分,具体技术细节未揭示的,请参照本公开方法实施例。The following is a device embodiment of the present disclosure. The device embodiment of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure. For convenience of explanation, only parts related to the embodiments of the present disclosure are shown. Specific technical details are not disclosed. Reference is made to the method embodiments of the present disclosure.
为了解决如何提高图像处理的灵活性的技术问题,本公开实施例提供 一种图像处理装置。该装置可以执行上述图像处理方法实施例中所述的步骤。如图4所示,该装置包括:接收模块41、第一处理模块42和第二处理模块43。In order to solve the technical problem of how to improve the flexibility of image processing, an embodiment of the present disclosure provides an image processing apparatus. The apparatus may perform the steps described in the above-mentioned embodiment of the image processing method. As shown in FIG. 4, the device includes a receiving module 41, a first processing module 42, and a second processing module 43.
接收模块41,用于接收原始图像;A receiving module 41, configured to receive an original image;
第一处理模块42,用于对原始图像进行第一处理,得到第一图像;A first processing module 42 configured to perform a first process on an original image to obtain a first image;
第二处理模块43,用于对原始图像进行第二处理,得到第二图像,其中所述第二处理为:A second processing module 43 is configured to perform a second process on the original image to obtain a second image, where the second process is:
(原始图像-第一图像*α)/β,其中0<α<1,0<β<1。(Original image-first image * α) / β, where 0 <α <1, 0 <β <1.
其中,所述原始图像可以是从图像传感器中采集到的未经任何处理的视频或者图片等;所述原始图像也可以从其他途径得到的视频或图片,如从网络服务器中下载得到或者从可移动存储器中读取得到;总之,原始图像并不限定为未经过任何处理的图像,而是指未经本申请实施例所述的图像处理方法处理过的图像。The original image may be a video or picture collected from an image sensor without any processing; the original image may also be a video or picture obtained through other channels, such as downloaded from a network server or read from a removable storage device. In short, the original image is not limited to an image that has not undergone any processing, but refers to an image that has not been processed by the image processing method described in the embodiments of the present application.
在一个实施例中,所述第一处理可以是任一种对图像的处理类型,举例来说,所述第一处理可以是模糊处理,即提取图像的低频分量;可以是分割处理,即将图像分割成多个不同的区域;可以是压缩处理,即对图像进行压缩,使其体积变小。所述第一处理的处理类型是可配置的,举例来说,可提供第一处理的人机交互界面,供用户从多个第一处理中选择一个作为当前配置的第一处理。In one embodiment, the first processing may be any type of processing on an image. For example, the first processing may be blur processing, that is, extracting the low-frequency component of the image; it may be segmentation processing, that is, segmenting the image into multiple different regions; or it may be compression processing, that is, compressing the image to make it smaller. The processing type of the first processing is configurable; for example, a human-computer interaction interface of the first processing may be provided for the user to select one of multiple first processings as the currently configured first processing.
在一个实施例中,所述第一处理模块42包括:第一模糊处理模块421,用于对原始图像进行模糊处理In one embodiment, the first processing module 42 includes: a first blur processing module 421, configured to perform blur processing on the original image
在一个实施例中,所述模糊处理为:根据图像中当前像素点的值与其周围相邻像素点的值计算平均值,将所述平均值作为当前像素点的值,对图像中所有的像素点遍历上述操作,得到的结果就是模糊处理之后的图像;在该实施例中,上述模糊处理中计算平均值的过程为:计算平滑矩阵,将图像当前的像素点的值和其周围相邻像素点的值与平滑矩阵做卷积计算,得到平均值。In one embodiment, the blurring processing is: calculating an average value according to the value of the current pixel in the image and the values of its neighboring pixels, and using the average value as the value of the current pixel; this operation is traversed over all pixels in the image, and the result obtained is the blurred image. In this embodiment, the process of calculating the average value in the above blurring processing is: calculating a smoothing matrix, and performing a convolution calculation between the smoothing matrix and the values of the current pixel of the image and its neighboring pixels to obtain the average value.
在一个实施例中,所述系数β的值与系数α的值相关联,其满足一定的函数关系,该关系可以根据需要来设定,可以提供系数人机交互界面,供用户调节α和β的关系。In one embodiment, the value of the coefficient β is associated with the value of the coefficient α and satisfies a certain functional relationship. The relationship can be set as needed, and a coefficient human-computer interaction interface can be provided for the user to adjust the relationship between α and β.
在一个实施例中,上述函数关系为:β=1-α。随着α的增大,β不断减小,1/β不断变大。In one embodiment, the above functional relationship is: β = 1-α. As α increases, β decreases, and 1 / β becomes larger.
在一实施例中,由于0<α<1,因此当α无限接近于1时,1/β将变得很大,容易导致错误。因此,优选的,β=1-α+c,其中c为常数,常数c的存在,保证1/β不会变成无穷大In an embodiment, since 0 <α <1, when α is infinitely close to 1, 1 / β will become very large, which may easily cause errors. Therefore, it is preferable that β = 1-α + c, where c is a constant, and the existence of the constant c ensures that 1 / β does not become infinite.
在一个实施例中,所述α的值可以动态调整,举例来说,用户可以通过系数配置人机交互界面对α值进行调整,人机交互界面接收到用户发送的系数配置指令,并根据该系数配置指令配置α的值;具体的,所述人机交互界面可以是滑动控件,比如滑块,滑块的初始位置为原点,用户拖动滑块离开原点的距离与α的值正相关,滑块离原点越远,α的值越大;或者滑动控件可以是旋钮,旋钮的初始角度为0°,用户拖动旋钮旋转,旋钮的旋转的角度越大,α的值越大。In one embodiment, the value of α can be dynamically adjusted. For example, the user can adjust the value of α through a coefficient-configuration human-computer interaction interface; the interface receives a coefficient configuration instruction sent by the user and configures the value of α according to that instruction. Specifically, the human-computer interaction interface may be a sliding control, such as a slider whose initial position is the origin; the distance by which the user drags the slider away from the origin is positively correlated with the value of α, so the farther the slider is from the origin, the larger the value of α. Alternatively, the sliding control may be a knob whose initial angle is 0°; when the user drags the knob to rotate it, the larger the rotation angle, the larger the value of α.
本实施例通过采取上述技术方案,可以根据系数,对待处理图像进行相应的处理,由此可以根据不同的系数获得相应的加强效果,从而提高了用户体验效果。In this embodiment, by adopting the foregoing technical solution, corresponding processing can be performed on an image to be processed according to coefficients, and accordingly a corresponding strengthening effect can be obtained according to different coefficients, thereby improving a user experience effect.
在一个实施例中,所述第一处理模块42包括:分割模块422,用于将原始图像分割为多个图像区域;中间处理模块423,用于获取/删除原始图像中的一个或多个图像区域,得到中间图像;第二模糊处理模块424,用于对所述中间图像进行模糊处理。In one embodiment, the first processing module 42 includes: a segmentation module 422, configured to segment the original image into multiple image regions; an intermediate processing module 423, configured to acquire/delete one or more image regions in the original image to obtain an intermediate image; and a second blur processing module 424, configured to perform blur processing on the intermediate image.
在一个实施例中,所述中间处理模块423包括:中间图像选择模块4231,用于选择所述图像中的一个或多个图像区域;将所选择的一个或多个图像区域作为中间图像;或者,将所选择的一个或多个图像区域删除,剩余的图像作为中间图像。In one embodiment, the intermediate processing module 423 includes: an intermediate image selection module 4231, configured to select one or more image regions in the image, and either use the selected one or more image regions as the intermediate image, or delete the selected one or more image regions and use the remaining image as the intermediate image.
在该实施例中,所述原始图像可以为人脸图像;所述图像中的一个部分或多个部分为人脸图像中的面部和五官。In this embodiment, the original image may be a face image; one or more parts of the image are faces and features in the face image.
在该实施例中,第一处理是分割处理、获取/删除处理、模糊处理组成的处理集合。In this embodiment, the first process is a processing set consisting of a division process, an acquisition / deletion process, and a blur process.
在该实施例中,所述处理集合中的处理流程可以是固定的或者是可配置的,举例来说,可以预先设置一些典型的处理集合,提供第一处理的人机交互界面,供用户从多个处理集合中选择一个处理集合作为当前配置的第一处理;也可以在第一处理的人机交互界面中提供多个第一处理的子处理,用户可以对这些子处理自由组合并指定子处理之间的处理顺序,以形成自定义的第一处理。In this embodiment, the processing flow within a processing set may be fixed or configurable. For example, some typical processing sets may be preset, and a human-computer interaction interface of the first processing may be provided for the user to select one processing set from multiple processing sets as the currently configured first processing; multiple sub-processes of the first processing may also be provided in the human-computer interaction interface of the first processing, and the user can freely combine these sub-processes and specify the processing order between them to form a customized first processing.
由此可见,本公开通过采取上述技术方案,根据用户发出的选择指令, 对第一处理进行配置,由此使得用户可以根据自己的需要对图形处理的效果进行调整,从而提高了用户体验效果。It can be seen that, by adopting the above technical solution, the present disclosure configures the first process according to the selection instruction issued by the user, thereby enabling the user to adjust the effect of the graphic processing according to his own needs, thereby improving the user experience effect.
图5是图示根据本公开的实施例的电子设备的硬件框图。如图5所示,根据本公开实施例的电子设备50包括存储器51和处理器52。FIG. 5 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure. As shown in FIG. 5, the electronic device 50 according to an embodiment of the present disclosure includes a memory 51 and a processor 52.
该存储器51用于存储非暂时性计算机可读指令。具体地,存储器51可以包括一个或多个计算机程序产品,该计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。该易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。该非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。The memory 51 is configured to store non-transitory computer-readable instructions. Specifically, the memory 51 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory. The volatile memory may include, for example, a random access memory (RAM) and / or a cache memory. The non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
该处理器52可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其它形式的处理单元,并且可以控制电子设备50中的其它组件以执行期望的功能。在本公开的一个实施例中,该处理器52用于运行该存储器51中存储的该计算机可读指令,使得该电子设备50执行前述的本公开各实施例的图像处理方法的全部或部分步骤。The processor 52 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions. In an embodiment of the present disclosure, the processor 52 is configured to run the computer-readable instructions stored in the memory 51 so that the electronic device 50 executes all or part of the steps of the image processing method of the foregoing embodiments of the present disclosure. .
本领域技术人员应能理解,为了解决如何获得良好用户体验效果的技术问题,本实施例中也可以包括诸如通信总线、接口等公知的结构,这些公知的结构也应包含在本公开的保护范围之内。Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also be included within the protection scope of the present disclosure.
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。For detailed descriptions of this embodiment, reference may be made to corresponding descriptions in the foregoing embodiments, and details are not described herein again.
图6是图示根据本公开的实施例的计算机可读存储介质的示意图。如图6所示,根据本公开实施例的计算机可读存储介质60,其上存储有非暂时性计算机可读指令61。当该非暂时性计算机可读指令61由处理器运行时,执行前述的本公开各实施例的图像处理方法的全部或部分步骤。FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 6, a computer-readable storage medium 60 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 61 stored thereon. When the non-transitory computer-readable instruction 61 is executed by a processor, all or part of the steps of the image processing method of the foregoing embodiments of the present disclosure are performed.
上述计算机可读存储介质60包括但不限于:光存储介质(例如:CD-ROM和DVD)、磁光存储介质(例如:MO)、磁存储介质(例如:磁带或移动硬盘)、具有内置的可重写非易失性存储器的媒体(例如:存储卡)和具有内置ROM的媒体(例如:ROM盒)。The computer-readable storage medium 60 includes, but is not limited to: optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), media with built-in rewritable non-volatile memory (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。For detailed descriptions of this embodiment, reference may be made to corresponding descriptions in the foregoing embodiments, and details are not described herein again.
图7是图示根据本公开实施例的终端设备的硬件结构示意图。如图7所示,该图像处理终端70包括上述图像处理装置实施例。FIG. 7 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 7, the image processing terminal 70 includes the foregoing image processing apparatus embodiment.
该终端设备可以以各种形式来实施,本公开中的终端设备可以包括但不限于诸如移动电话、智能电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、导航装置、车载终端设备、车载显示终端、车载电子后视镜等等的移动终端设备以及诸如数字TV、台式计算机等等的固定终端设备。The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
作为等同替换的实施方式,该终端还可以包括其他组件。如图7所示,该图像处理终端70可以包括电源单元71、无线通信单元72、A/V(音频/视频)输入单元73、用户输入单元74、感测单元75、接口单元76、控制器77、输出单元78和存储单元79等等。图7示出了具有各种组件的终端,但是应理解的是,并不要求实施所有示出的组件,也可以替代地实施更多或更少的组件。As an equivalent alternative implementation, the terminal may further include other components. As shown in FIG. 7, the image processing terminal 70 may include a power supply unit 71, a wireless communication unit 72, an A / V (audio / video) input unit 73, a user input unit 74, a sensing unit 75, an interface unit 76, and a controller. 77, an output unit 78, a storage unit 79, and so on. FIG. 7 shows a terminal having various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
其中,无线通信单元72允许终端70与无线通信系统或网络之间的无线电通信。A/V输入单元73用于接收音频或视频信号。用户输入单元74可以根据用户输入的命令生成键输入数据以控制终端设备的各种操作。感测单元75检测终端70的当前状态、终端70的位置、用户对于终端70的触摸输入的有无、终端70的取向、终端70的加速或减速移动和方向等等,并且生成用于控制终端70的操作的命令或信号。接口单元76用作至少一个外部装置与终端70连接可以通过的接口。输出单元78被构造为以视觉、音频和/或触觉方式提供输出信号。存储单元79可以存储由控制器77执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据。存储单元79可以包括至少一种类型的存储介质。而且,终端70可以与通过网络连接执行存储单元79的存储功能的网络存储装置协作。控制器77通常控制终端设备的总体操作。另外,控制器77可以包括用于再现或回放多媒体数据的多媒体模块。控制器77可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。电源单元71在控制器77的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。The wireless communication unit 72 allows radio communication between the terminal 70 and a wireless communication system or network. The A/V input unit 73 is used to receive audio or video signals. The user input unit 74 may generate key input data according to commands input by the user to control various operations of the terminal device. The sensing unit 75 detects the current state of the terminal 70, the position of the terminal 70, the presence or absence of a user's touch input to the terminal 70, the orientation of the terminal 70, the acceleration or deceleration movement and direction of the terminal 70, and so on, and generates commands or signals for controlling the operation of the terminal 70. The interface unit 76 serves as an interface through which at least one external device can connect with the terminal 70. The output unit 78 is configured to provide output signals in a visual, audio and/or tactile manner. The storage unit 79 may store software programs for the processing and control operations performed by the controller 77, or may temporarily store data that has been output or is to be output. The storage unit 79 may include at least one type of storage medium. Moreover, the terminal 70 may cooperate with a network storage device that performs the storage function of the storage unit 79 through a network connection. The controller 77 generally controls the overall operation of the terminal device. In addition, the controller 77 may include a multimedia module for reproducing or playing back multimedia data. The controller 77 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images. The power supply unit 71 receives external power or internal power under the control of the controller 77 and provides the appropriate power required to operate each element and component.
本公开提出的图像处理方法的各种实施方式可以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,本公开提出的图像处理方法的各种实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,本公开提出的图像处理方法的各种实施方式可以在控制器77中实施。对于软件实施,本公开提出的图像处理方法的各种实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储单元79中并且由控制器77执行。Various embodiments of the image processing method proposed by the present disclosure may be implemented using a computer-readable medium, for example computer software, hardware, or any combination thereof. For hardware implementation, various embodiments of the image processing method proposed by the present disclosure may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, various embodiments of the image processing method proposed by the present disclosure may be implemented in the controller 77. For software implementation, various embodiments of the image processing method proposed by the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the storage unit 79 and executed by the controller 77.
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。For detailed descriptions of this embodiment, reference may be made to corresponding descriptions in the foregoing embodiments, and details are not described herein again.
以上结合具体实施例描述了本公开的基本原理,但是,需要指出的是,在本公开中提及的优点、优势、效果等仅是示例而非限制,不能认为这些优点、优势、效果等是本公开的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本公开为必须采用上述具体的细节来实现。The basic principles of the present disclosure have been described above in conjunction with specific embodiments, but it should be noted that the advantages, advantages, effects, etc. mentioned in this disclosure are merely examples and not limitations, and these advantages, advantages, effects, etc. cannot be considered as Required for various embodiments of the present disclosure. In addition, the specific details of the above disclosure are merely for the purpose of illustration and ease of understanding, and are not limiting, and the above details do not limit the present disclosure to the implementation of the above specific details.
本公开中涉及的器件、装置、设备、***的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、***。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。The block diagrams of the devices, devices, equipment, and systems involved in this disclosure are only illustrative examples and are not intended to require or imply that they must be connected, arranged, and configured in the manner shown in the block diagrams. As those skilled in the art would realize, these devices, devices, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "including," "having," and the like are open words, meaning "including, but not limited to," and can be used interchangeably with them. As used herein, the words "or" and "and" refer to the words "and / or" and are used interchangeably with each other, unless the context clearly indicates otherwise. The term "such as" as used herein refers to the phrase "such as, but not limited to," and is used interchangeably with it.
另外,如在此使用的,在以“至少一个”开始的项的列举中使用的“或”指示分离的列举,以便例如“A、B或C的至少一个”的列举意味着A或B或C,或AB或AC或BC,或ABC(即A和B和C)。此外,措辞“示例的”不意味着描述的例子是优选的或者比其他例子更好。In addition, as used herein, an "or" used in an enumeration of items beginning with "at least one" indicates a separate enumeration such that, for example, an "at least one of A, B, or C" enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
还需要指出的是,在本公开的***和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本公开的等效方案。It should also be noted that, in the system and method of the present disclosure, each component or each step can be disassembled and / or recombined. These decompositions and / or recombinations should be considered as equivalent solutions of the present disclosure.
可以不脱离由所附权利要求定义的教导的技术而进行对在此所述的技术的各种改变、替换和更改。此外,本公开的权利要求的范围不限于以上所述的处理、机器、制造、事件的组成、手段、方法和动作的具体方面。可以利用与在此所述的相应方面进行基本相同的功能或者实现基本相同的结果的当前存在的或者稍后要开发的处理、机器、制造、事件的组成、手段、方法或动作。因而,所附权利要求包括在其范围内的这样的处理、机器、制造、事件的组成、手段、方法或动作。Various changes, substitutions, and alterations to the techniques described herein can be made without departing from the techniques taught by the claims defined below. Further, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacturing, composition of events, means, methods, and actions described above. A composition, means, method, or action of a process, machine, manufacturing, event that currently exists or is to be developed at a later time may be utilized to perform substantially the same functions or achieve substantially the same results as the corresponding aspects described herein. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or actions.
提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本公开。对这些方面的各种修改对于本领域技术人员而言是非常显 而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本公开的范围。因此,本公开不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Accordingly, the disclosure is not intended to be limited to the aspects shown herein, but to the broadest scope consistent with the principles and novel features disclosed herein.
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本公开的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。The foregoing description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.

Claims (12)

  1. 一种图像处理方法,包括:An image processing method includes:
    接收原始图像;Receive the original image;
    对原始图像进行第一处理,得到第一图像;First processing the original image to obtain a first image;
    对原始图像进行第二处理,得到第二图像,其中所述第二处理为:Perform a second process on the original image to obtain a second image, where the second process is:
    (原始图像-第一图像*α)/β,其中0<α<1,0<β<1。(Original image-first image * α) / β, where 0 <α <1, 0 <β <1.
  2. 如权利要求1所述的图像处理方法,其中:The image processing method according to claim 1, wherein:
    所述β的值与α的值相关联。The value of β is associated with the value of α.
  3. 如权利要求2所述的图像处理方法,其中:The image processing method according to claim 2, wherein:
    所述β=1-α。The β = 1-α.
  4. 如权利要求2所述的图像处理方法,其中:The image processing method according to claim 2, wherein:
    β=1-α+c,其中c为常量,且0<c<1。β = 1-α + c, where c is a constant, and 0 <c <1.
  5. 如权利要求1所述的图像处理方法,其中所述第一处理为:对原始图像进行模糊处理。The image processing method according to claim 1, wherein the first processing is: performing blur processing on the original image.
  6. 如权利要求1所述的图像处理方法,其中所述第一处理为:The image processing method according to claim 1, wherein the first processing is:
    将原始图像分割为多个图像区域;Split the original image into multiple image regions;
    获取/删除原始图像中的一个或多个图像区域,得到中间图像;Acquiring / deleting one or more image regions in the original image to obtain an intermediate image;
    对所述中间图像进行模糊处理。Performing blur processing on the intermediate image.
  7. 如权利要求5或6所述的图像处理方法,其中所述模糊处理为:The image processing method according to claim 5 or 6, wherein the blur processing is:
    根据图像当前像素点的值与其周围相邻像素点的值计算平均值,将所述平均值作为当前像素点的值。An average value is calculated according to the value of the current pixel point of the image and the values of neighboring pixel points around it, and the average value is used as the value of the current pixel point.
  8. 如权利要求7所述的图像处理方法,其中所述计算平均值为:The image processing method according to claim 7, wherein the calculated average value is:
    计算平滑矩阵,将图像当前像素点的值和其周围相邻像素点的值与平滑矩阵做卷积计算,得到平均值。Calculate the smoothing matrix, and convolve the values of the current pixel point of the image and the values of its neighboring pixels with the smoothing matrix to obtain the average value.
  9. 如权利要求6所述的图像处理方法,其中所述获取/删除原始图像中的一个或多个图像区域,得到中间图像包括:The image processing method according to claim 6, wherein the acquiring / deleting one or more image regions in the original image to obtain the intermediate image comprises:
    接收选择指令,所述选择指令用于选择所述图像中的一个或多个图像区域;Receiving a selection instruction, the selection instruction being used to select one or more image regions in the image;
    将所选择的一个或多个图像区域作为中间图像;Using the selected one or more image regions as an intermediate image;
    或者,将所选择的一个或多个图像区域删除,剩余的图像作为中间图像。Alternatively, the selected one or more image regions are deleted, and the remaining images are used as intermediate images.
  10. 一种图像处理装置,包括:An image processing device includes:
    接收模块,用于接收原始图像;A receiving module for receiving an original image;
    第一处理模块,用于对原始图像进行第一处理,得到第一图像;A first processing module, configured to perform first processing on the original image to obtain a first image;
    第二处理模块,用于对原始图像进行第二处理,得到第二图像,其中所 述第二处理为:A second processing module, configured to perform a second processing on the original image to obtain a second image, where the second processing is:
    (原始图像-第一图像*α)/β,其中0<α<1,0<β<1。(Original image-first image * α) / β, where 0 <α <1, 0 <β <1.
  11. 一种电子设备,包括:An electronic device includes:
    至少一个处理器;以及,At least one processor; and
    与所述至少一个处理器通信连接的存储器;其中,A memory connected in communication with the at least one processor; wherein,
    所述存储器存储有能被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行权利要求1-9任一所述的图像处理方法。The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the image according to any one of claims 1-9. Approach.
  12. 一种非暂态计算机可读存储介质,其中该非暂态计算机可读存储介质存储计算机指令,该计算机指令用于使计算机执行权利要求1-9任一所述的图像处理方法。A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to execute the image processing method according to any one of claims 1-9.
PCT/CN2019/073069 2018-06-13 2019-01-25 Image processing method, device, electronic device and computer readable storage medium WO2019237743A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810609995.X 2018-06-13
CN201810609995.XA CN108932702B (en) 2018-06-13 2018-06-13 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2019237743A1 true WO2019237743A1 (en) 2019-12-19

Family

ID=64446579

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073069 WO2019237743A1 (en) 2018-06-13 2019-01-25 Image processing method, device, electronic device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108932702B (en)
WO (1) WO2019237743A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932702B (en) * 2018-06-13 2020-10-09 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110070494B (en) * 2018-12-21 2021-09-17 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN109785264B (en) * 2019-01-15 2021-11-16 北京旷视科技有限公司 Image enhancement method and device and electronic equipment
CN113473038A (en) * 2020-03-30 2021-10-01 上海商汤智能科技有限公司 Image processing apparatus, image processing method, and related product
CN112150351A (en) * 2020-09-27 2020-12-29 广州虎牙科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113160357A (en) * 2021-04-07 2021-07-23 浙江工商大学 Information auditing method, system and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310411A (en) * 2012-09-25 2013-09-18 中兴通讯股份有限公司 Image local reinforcement method and device
US20140133776A1 (en) * 2012-11-12 2014-05-15 Marvell World Trade lid. Systems and Methods for Image Enhancement by Local Tone Curve Mapping
CN105303543A (en) * 2015-10-23 2016-02-03 努比亚技术有限公司 Image enhancement method and mobile terminal
CN105654496A (en) * 2016-01-08 2016-06-08 华北理工大学 Visual characteristic-based bionic adaptive fuzzy edge detection method
CN108932702A (en) * 2018-06-13 2018-12-04 北京微播视界科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI314301B (en) * 2006-06-30 2009-09-01 Primax Electronics Ltd Adaptive image sharpening method
CN101452575B (en) * 2008-12-12 2010-07-28 北京航空航天大学 Image self-adapting enhancement method based on neural net
CN101794380B (en) * 2010-02-11 2012-08-08 上海点佰趣信息科技有限公司 Enhancement method of fingerprint image
CN102214357A (en) * 2011-06-22 2011-10-12 王洪剑 Image enhancement method and system
CN104376542B (en) * 2014-10-25 2019-04-23 深圳市金立通信设备有限公司 A kind of image enchancing method
CN107153816B (en) * 2017-04-16 2021-03-23 五邑大学 Data enhancement method for robust face recognition
CN107945163B (en) * 2017-11-23 2020-04-28 广州酷狗计算机科技有限公司 Image enhancement method and device
CN108024103A (en) * 2017-12-01 2018-05-11 重庆贝奥新视野医疗设备有限公司 Image sharpening method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310411A (en) * 2012-09-25 2013-09-18 中兴通讯股份有限公司 Image local reinforcement method and device
US20140133776A1 (en) * 2012-11-12 2014-05-15 Marvell World Trade lid. Systems and Methods for Image Enhancement by Local Tone Curve Mapping
CN105303543A (en) * 2015-10-23 2016-02-03 努比亚技术有限公司 Image enhancement method and mobile terminal
CN105654496A (en) * 2016-01-08 2016-06-08 华北理工大学 Visual characteristic-based bionic adaptive fuzzy edge detection method
CN108932702A (en) * 2018-06-13 2018-12-04 北京微播视界科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN108932702A (en) 2018-12-04
CN108932702B (en) 2020-10-09

Similar Documents

Publication Publication Date Title
WO2019237743A1 (en) Image processing method, device, electronic device and computer readable storage medium
US10430075B2 (en) Image processing for introducing blurring effects to an image
US11017580B2 (en) Face image processing based on key point detection
WO2020233010A1 (en) Image recognition method and apparatus based on segmentable convolutional network, and computer device
WO2019242271A1 (en) Image warping method and apparatus, and electronic device
EP2945374B1 (en) Positioning of projected augmented reality content
WO2020001014A1 (en) Image beautification method and apparatus, and electronic device
CN108833784B (en) Self-adaptive composition method, mobile terminal and computer readable storage medium
WO2019237745A1 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
CN108924440B (en) Sticker display method, device, terminal and computer-readable storage medium
CN108898549B (en) Picture processing method, picture processing device and terminal equipment
WO2019237747A1 (en) Image cropping method and apparatus, and electronic device and computer-readable storage medium
WO2021093499A1 (en) Image processing method and apparatus, storage medium, and electronic device
CN112967381B (en) Three-dimensional reconstruction method, apparatus and medium
US11645737B2 (en) Skin map-aided skin smoothing of images using a bilateral filter
CN110503704B (en) Method and device for constructing three-dimensional graph and electronic equipment
AU2021225277B2 (en) Image cropping method and apparatus, and device and storage medium
CN111275824A (en) Surface reconstruction for interactive augmented reality
CN109062471B (en) Mobile display method, device, equipment and medium for page elements
CN110677586B (en) Image display method, image display device and mobile terminal
CN112801882B (en) Image processing method and device, storage medium and electronic equipment
WO2023207741A1 (en) Modeling method for metaverse scene material and related device
CN109739403B (en) Method and apparatus for processing information
CN111045576B (en) Display control method, display control device, terminal equipment and electronic equipment
CN115665347A (en) Video clipping method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19820506

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19820506

Country of ref document: EP

Kind code of ref document: A1