WO2021031847A1 - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021031847A1
WO2021031847A1 PCT/CN2020/106861 CN2020106861W WO2021031847A1 WO 2021031847 A1 WO2021031847 A1 WO 2021031847A1 CN 2020106861 W CN2020106861 W CN 2020106861W WO 2021031847 A1 WO2021031847 A1 WO 2021031847A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
preset
special effect
video
preset target
Prior art date
Application number
PCT/CN2020/106861
Other languages
English (en)
French (fr)
Inventor
王兢业
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Priority to JP2022509655A priority Critical patent/JP7338041B2/ja
Priority to EP20854936.0A priority patent/EP4016993A4/en
Publication of WO2021031847A1 publication Critical patent/WO2021031847A1/zh
Priority to US17/672,529 priority patent/US11516411B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/395Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
    • G09G5/397Arrangements specially adapted for transferring the contents of two or more bit-mapped memories to the screen simultaneously, e.g. for mixing or overlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1407General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/06Adjustment of display parameters
    • G09G2320/0613The adjustment depending on the type of the information to be displayed
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G2340/125Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to an image processing method, device, electronic equipment, and computer-readable storage medium.
  • the technical problem solved by the present disclosure is to provide an image processing method to at least partially solve the technical problem that the preset action special effect cannot be realized in the prior art.
  • an image processing device, an image processing hardware device, a computer-readable storage medium, and an image processing terminal are also provided.
  • An image processing method, including: acquiring a video image; superimposing a foreground sticker on a target image corresponding to a preset target when it is detected that the preset target appears in the video image; and generating a screen special effect when it is detected that the preset target in the video image performs a preset action.
  • An image processing device including:
  • Image acquisition module for acquiring video images
  • the sticker superimposing module is configured to superimpose a foreground sticker on the target image corresponding to the preset target when it is detected that a preset target appears in the video image;
  • the special effect generation module is used to generate screen special effects when it is detected that the preset target in the video image has a preset action.
  • An electronic device including:
  • Memory for storing non-transitory computer readable instructions
  • the processor is configured to run the computer-readable instructions so that the processor implements the image processing method described in any one of the foregoing when executed.
  • a computer-readable storage medium is used to store non-transitory computer-readable instructions.
  • the computer is caused to execute the image processing method described in any one of the above.
  • An image processing terminal includes any image processing device described above.
  • In the embodiments of the present disclosure, a video image is obtained; when a preset target is detected in the video image, a foreground sticker is superimposed on the target image corresponding to the preset target; and when it is detected that the preset target in the video image performs a preset action, a screen special effect is generated, so that a screen special effect for the preset action performed by the preset target can be realized.
  • Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • Fig. 3 is a schematic flowchart of an image processing apparatus according to an embodiment of the present disclosure
  • Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an image processing method. As shown in FIG. 1, the image processing method mainly includes the following steps S11 to S13.
  • Step S11 Obtain a video image.
  • the video image may be a video stream input in real time, for example, a live video in a short video application, or a video image pre-stored in the terminal.
  • the terminal may be a mobile terminal, such as a smart phone, a tablet computer, or a fixed terminal, such as a desktop computer.
  • Step S12 When it is detected that a preset target appears in the video image, a foreground sticker is superimposed on the target image corresponding to the preset target.
  • the preset target may be a preset gesture, such as an air-punch gesture, a scissor-hand (V) gesture, or an OK gesture.
  • the foreground sticker is a sticker used as the foreground of the preset target.
  • the foreground sticker of the preset target can be obtained from the Internet.
  • Specifically, an existing detection algorithm (for example, a neural-network-based target detection algorithm or a region-based target detection algorithm) may be used to detect the video image and identify video images that contain the preset target.
  • Step S13 When it is detected that the preset target in the video image has a preset action, a screen special effect is generated.
  • the preset action is an action associated with the preset target.
  • For example, when the preset target is an air-punch gesture, the corresponding preset action is a punching action.
  • the screen special effect may be a screen shaking special effect and/or a screen special effect image.
  • the screen special effect image may be generated based on the foreground sticker, or may be another image unrelated to the foreground sticker. This screen special effect image corresponds to the preset action.
  • In this embodiment, a video image is acquired; when a preset target is detected in the video image, a foreground sticker is superimposed on the target image corresponding to the preset target; and when it is detected that the preset target in the video image performs a preset action, a screen special effect is generated, so that a screen special effect for the preset action performed by the preset target can be realized.
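  • For illustration only, the per-frame flow of steps S11 to S13 can be sketched as follows; the detector and renderer interfaces used here (find_preset_target, overlay_foreground_sticker, generate_screen_effect) are placeholder names assumed for the sketch, not APIs named in the disclosure.

```python
def process_frame(frame, history, detector, effects):
    """Illustrative per-frame pipeline: detect target, overlay sticker, trigger effect."""
    target = detector.find_preset_target(frame)          # e.g. an air-punch gesture
    if target is None:
        return frame                                      # no preset target: show the frame as-is

    # Step S12: superimpose the foreground sticker on the image region of the target.
    frame = effects.overlay_foreground_sticker(frame, target.region)

    # Step S13: when the preset action associated with the target is detected
    # (e.g. a punching motion), generate the screen special effect.
    if detector.detect_preset_action(target, history):
        frame = effects.generate_screen_effect(frame)

    history.append(target)
    return frame
```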
  • step S13 specifically includes:
  • a distorted screen special effect image is generated according to the foreground sticker.
  • the screen special effect image may be a longitudinally distorted foreground sticker, or a horizontally distorted foreground sticker, or a longitudinally and horizontally distorted foreground sticker.
  • generating a distorted screen special effect image according to the foreground sticker includes: determining an offset of each pixel in the foreground sticker when it is detected that the preset target in the video image performs the preset action; and moving each pixel in the foreground sticker according to the offset to obtain the distorted screen special effect image.
  • the offset can be a longitudinal offset, or a lateral offset, or a longitudinal offset and a lateral offset.
  • determining the offset of each pixel in the foreground sticker when it is detected that the preset target in the video image performs a preset action includes: acquiring at least one parameter among the height of the video image, an offset amplitude, a system time, a distortion coefficient, a distortion density, and a compensation coefficient; and determining the offset of each pixel in the foreground sticker according to the at least one parameter.
  • the distortion coefficient is used to control how fast the distortion varies.
  • the compensation coefficients are used to prevent the flicker phenomenon at the upper and lower boundaries of the image.
  • determining the offset of each pixel in the foreground sticker according to the at least one parameter includes: determining the longitudinal offset of each pixel in the foreground sticker using a formula (the formula itself is reproduced only as an image in the original publication), where:
  • x is the abscissa of the pixel, for example a normalized abscissa;
  • y is the ordinate of the pixel, for example a normalized ordinate;
  • α is the offset amplitude, which may for example be derived from h, the height of the video image (the expression for α is likewise given as an image in the original);
  • β is the distortion coefficient, used to control how fast the distortion varies, and may for example be set to 30;
  • t is the system time, in seconds;
  • δ is the distortion density, which may for example be set to 2.7;
  • ε1 and ε2 are the compensation coefficients;
  • sin(·) is the sine function.
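  • The formula itself is published only as an image, so it is not reproduced here; purely as a sketch, a longitudinal offset of the sinusoidal form suggested by the listed parameters could be computed as below, where the exact expression, the value of α, and the shape of the compensation terms ε1 and ε2 are assumptions rather than the disclosed formula.

```python
import numpy as np

def vertical_offsets(width, height, t, beta=30.0, delta=2.7):
    """Illustrative per-pixel longitudinal offsets for the distortion effect.

    The published formula is only available as an image, so the sinusoidal
    form and the boundary-compensation terms below are assumptions chosen to
    match the described parameters (amplitude tied to the image height h,
    distortion coefficient beta ~ 30, distortion density delta ~ 2.7).
    """
    x = np.linspace(0.0, 1.0, width)[None, :]    # normalized abscissa
    y = np.linspace(0.0, 1.0, height)[:, None]   # normalized ordinate

    alpha = 0.05 * height                        # assumed offset amplitude derived from h
    wave = np.sin(beta * t + 2.0 * np.pi * delta * x)

    # Compensation terms fade the offset to zero near the upper and lower
    # borders, preventing flicker at the image boundaries.
    eps1 = np.clip(y / 0.1, 0.0, 1.0)
    eps2 = np.clip((1.0 - y) / 0.1, 0.0, 1.0)

    return alpha * wave * eps1 * eps2            # shape (height, width), in pixels
```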
  • the method further includes:
  • Step S14 Perform filter processing on the screen special effect image
  • Step S15 Render the screen special effect image processed by the filter.
  • the foreground sticker, the screen special effect, and the filter are arranged in order from back to front facing the screen direction.
  • the foreground sticker is composed of a first video sequence
  • step S12 specifically includes: selecting the first frame of video image from the first video sequence as the foreground sticker;
  • step S13 specifically includes: periodically selecting, according to the superimposition progress of the video images, video images in sequence from the remaining frames of the first video sequence as screen special effect images.
  • the foreground sticker is also composed of a video sequence.
  • the overlay period of the foreground sticker is consistent with the overlay period of the filter. The period can be specifically determined according to the frame rate of the video image.
  • the filter consists of a second video sequence
  • step S14 specifically includes:
  • Step S141 According to the superimposition progress of the video images, periodically select video images from the second video sequence as filter images in sequence;
  • Specifically, the filter may adopt the multiply blend mode and be composed of a video sequence; a new frame of the sequence is adopted as the filter every 50 milliseconds, and the sequence is superimposed cyclically.
  • Step S142 Use the filter image to perform filter processing on the screen special effect image.
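  • As a minimal sketch of this step (assuming 8-bit RGB frames of equal size; the helper names are not from the disclosure), multiply-mode blending and the 50-millisecond cyclic selection of filter frames could look like this:

```python
import numpy as np

def multiply_blend(base, filter_frame):
    """Multiply-mode blend of two 8-bit RGB frames of equal size."""
    out = (base.astype(np.float32) / 255.0) * (filter_frame.astype(np.float32) / 255.0)
    return (out * 255.0).astype(np.uint8)        # per-channel multiply darkens the image

def current_filter_frame(filter_sequence, elapsed_ms, period_ms=50):
    """Select the filter frame for the current period, cycling through the sequence."""
    index = int(elapsed_ms // period_ms) % len(filter_sequence)
    return filter_sequence[index]
```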
  • the method further includes:
  • the period is controlled by the first timer.
  • Specifically, this can be implemented by setting a timer that is activated once every preset time interval (for example, every 50 milliseconds).
  • In an optional embodiment, after it is detected that a preset target appears in the video image and before the foreground sticker is superimposed on the target image corresponding to the preset target, the method further includes: starting a second timer for timing.
  • In an optional embodiment, the method further includes: when the second timer counts beyond a preset time, turning off the filter, the screen special effect, and the foreground sticker in sequence.
  • the foreground sticker, the screen special effect, and the filter are arranged in order from back to front facing the screen direction.
  • the direction facing the screen may be represented by coordinates in a certain direction of the three-dimensional coordinate system, such as the z-axis direction.
  • the coordinates of the foreground sticker, the screen special effect, and the filter are sequentially decreased along the z-axis.
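  • A small sketch of this back-to-front ordering (the Layer structure is assumed for illustration): layers are drawn in order of decreasing z, so the foreground sticker (largest z) is rendered first and the filter (smallest z, closest to the screen) last.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Layer:
    name: str
    z: float                      # larger z = farther from the screen
    draw: Callable                # draw(frame) -> frame

def composite(frame, layers):
    """Render layers back to front, i.e. in order of decreasing z coordinate."""
    for layer in sorted(layers, key=lambda l: l.z, reverse=True):
        frame = layer.draw(frame)
    return frame
```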
  • the image processing method mainly includes the following steps S21 to S27.
  • Step S21 Obtain a video image.
  • Step S22 When it is detected that a preset target appears in the video image, a timer is started for timing.
  • Step S23 Superimpose the foreground sticker.
  • Step S24 When it is detected that the preset target in the video image has a preset action, a screen special effect image is generated.
  • Step S25 Perform filter processing on the screen special effect image.
  • Step S26 Render the screen special effect image processed by the filter.
  • Specifically, after the preset target is detected, a timer is set; its duration can be 2 seconds.
  • During the time period in which the timer is active, three special effects are triggered: superimposing the foreground sticker, generating the screen special effect, and applying the filter effect; the z-axis heights of the three decrease in that order.
  • the filter adopts the multiply blend mode and is composed of a video sequence.
  • a new frame is adopted as the filter every 50 milliseconds, and the cyclic superimposition is achieved by setting another timer, which is activated once every 50 milliseconds.
  • the foreground sticker is also composed of a video sequence.
  • to keep the special effects synchronized, the same timer is used for the foreground sticker, i.e., it is also triggered every 50 milliseconds.
  • Step S27 When the first timer exceeds a preset time, turn off the filter, the screen special effect, and the foreground sticker in sequence.
  • the preset time can be 2s.
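  • Purely as a sketch of the two-timer scheme described in this embodiment (class and method names are assumptions, not from the disclosure): a roughly 2-second session is opened when the preset target is detected, a shared 50-millisecond tick advances the sticker and filter video sequences, and when the session expires the filter, the screen special effect, and the foreground sticker are turned off in sequence.

```python
import time

class EffectSession:
    """Illustrative 2-second effect session driven by a shared 50 ms tick."""

    def __init__(self, duration_s=2.0, tick_ms=50):
        self.start = time.monotonic()
        self.duration_s = duration_s
        self.tick_ms = tick_ms

    def expired(self):
        return (time.monotonic() - self.start) > self.duration_s

    def frame_index(self, sequence_length):
        """Index into the sticker/filter video sequence for the current tick."""
        elapsed_ms = (time.monotonic() - self.start) * 1000.0
        return int(elapsed_ms // self.tick_ms) % sequence_length

def end_session_if_expired(session, effects):
    """Turn the effects off in the order of step S27 once the session expires."""
    if session is not None and session.expired():
        for name in ("filter", "screen_effect", "foreground_sticker"):
            effects.disable(name)      # effects.disable is a placeholder interface
        return None
    return session
```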
  • the device embodiments of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides an image processing device.
  • the device can execute the steps in the image processing method embodiment described in the first embodiment.
  • the device mainly includes: an image acquisition module 31, a sticker overlay module 32, a special effect generation module 33, and an image rendering module 34; among them,
  • the image acquisition module 31 is used to acquire video images
  • the sticker superimposing module 32 is configured to superimpose a foreground sticker on the target image corresponding to the preset target when it is detected that a preset target appears in the video image;
  • the special effect generating module 33 is configured to generate a screen special effect when it is detected that the preset target in the video image has a preset action.
  • the special effect generating module 33 is specifically configured to generate a distorted screen special effect image according to the foreground sticker when it is detected that the predetermined target in the video image has a predetermined action.
  • the special effect generation module 33 includes: an offset determination unit 331 and a special effect generation unit 332; wherein,
  • the offset determination unit 331 is configured to determine the offset of each pixel in the foreground sticker when it is detected that the preset target in the video image has a preset action;
  • the special effect generating unit 332 is configured to move each pixel in the foreground sticker according to the offset to obtain the distorted screen special effect image.
  • the offset determination unit 331 is specifically configured to: when it is detected that the preset target in the video image performs a preset action, acquire at least one parameter among the height of the video image, the offset amplitude, the system time, the distortion coefficient, the distortion density, and the compensation coefficient; and determine the offset of each pixel in the foreground sticker according to the at least one parameter.
  • the offset determination unit 331 is specifically configured to: determine the longitudinal offset of each pixel in the foreground sticker using a formula (reproduced as an image in the original publication), where x is the abscissa of the pixel, y is the ordinate of the pixel, α is the offset amplitude, β is the distortion coefficient, t is the system time, δ is the distortion density, ε1 and ε2 are the compensation coefficients, and sin(·) is the sine function.
  • the device further includes: a filter module 34 and an image rendering module 35; wherein,
  • the filter module 34 is configured to perform filter processing on the screen special effect image
  • the image rendering module 35 is used to render the screen special effect image processed by the filter.
  • the foreground sticker is composed of a first video sequence
  • the sticker superimposing module 32 is specifically configured to: select a first frame of video image from the first video sequence as the foreground sticker;
  • the special effect generation module 33 is specifically configured to: periodically select video images from the remaining videos in the first video sequence as the screen special effect images according to the superimposition progress of the video images.
  • the filter is composed of a second video sequence
  • the filter module 34 is specifically configured to: periodically select, according to the superimposition progress of the video images, video images in sequence from the second video sequence as filter images; and use the filter images to perform filter processing on the screen special effect image.
  • the device further includes: a first timer module 36; wherein,
  • the first timer module 36 is used to control the period through the first timer.
  • the device further includes: a second timer module 37;
  • the second timer module 37 is configured to start a second timer for timing after detecting that a preset target appears in the video image and before superimposing the foreground sticker on the target image corresponding to the preset target.
  • the second timer module 37 is further configured to turn off the filter, the screen special effect, and the foreground sticker in sequence when the second timer counts more than a preset time.
  • the foreground sticker, the screen special effect, and the filter are arranged in order from back to front facing the screen direction.
  • the terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 4 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (such as a central processing unit or a graphics processor) 401, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 406 into a random access memory (RAM) 403.
  • the RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, ROM 402, and RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to the bus 404.
  • the following devices can be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 406 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication device 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 4 shows an electronic device 400 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 409, or installed from the storage device 406, or installed from the ROM 402.
  • the processing device 401 When the computer program is executed by the processing device 401, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire a video image; when a preset target is detected in the video image, superimpose a foreground sticker on the target image corresponding to the preset target; and when it is detected that the preset target in the video image performs a preset action, generate a screen special effect.
  • the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented in a software manner, or may be implemented in a hardware manner. Among them, the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard drives, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • According to one or more embodiments of the present disclosure, an image processing method is provided, including: acquiring a video image; superimposing a foreground sticker on a target image corresponding to a preset target when it is detected that the preset target appears in the video image; and generating a screen special effect when it is detected that the preset target in the video image performs a preset action.
  • the generating a screen special effect when a preset action is detected on the preset target in the video image includes:
  • a distorted screen special effect image is generated according to the foreground sticker.
  • generating a distorted screen special effect image according to the foreground sticker includes:
  • determining the offset of each pixel in the foreground sticker when it is detected that the preset target in the video image performs a preset action includes: acquiring at least one parameter among the height of the video image, an offset amplitude, a system time, a distortion coefficient, a distortion density, and a compensation coefficient; and determining the offset of each pixel in the foreground sticker according to the at least one parameter.
  • determining the offset of each pixel in the foreground sticker according to the at least one parameter includes: determining the longitudinal offset of each pixel in the foreground sticker using a formula (given as an image in the original publication), where x is the abscissa of the pixel, y is the ordinate of the pixel, α is the offset amplitude, β is the distortion coefficient, t is the system time, δ is the distortion density, ε1 and ε2 are the compensation coefficients, and sin(·) is the sine function.
  • the foreground sticker is composed of a first video sequence
  • correspondingly, superimposing a foreground sticker on the target image corresponding to the preset target includes: selecting the first frame of video image from the first video sequence as the foreground sticker;
  • correspondingly, generating a screen special effect superimposition includes: periodically selecting, according to the superimposition progress of the video images, video images in sequence from the remaining frames of the first video sequence as screen special effect images.
  • the method further includes: performing filter processing on the screen special effect image; and rendering the screen special effect image after the filter processing.
  • the filter is composed of a second video sequence
  • performing filter processing on the screen special effect image includes: periodically selecting, according to the superimposition progress of the video images, video images in sequence from the second video sequence as filter images; and using the filter images to perform filter processing on the screen special effect image.
  • the method further includes: controlling the period by a first timer.
  • further, after it is detected that a preset target appears in the video image and before the foreground sticker is superimposed on the target image corresponding to the preset target, the method further includes: starting a second timer for timing.
  • further, the method further includes: when the second timer counts beyond a preset time, turning off the filter, the screen special effect, and the foreground sticker in sequence.
  • the spatial coordinates of the foreground sticker, the screen special effect, and the filter facing the screen are sequentially reduced.
  • an image processing apparatus including:
  • Image acquisition module for acquiring video images
  • the sticker superimposing module is configured to superimpose a foreground sticker on the target image corresponding to the preset target when it is detected that a preset target appears in the video image;
  • the special effect generation module is used to generate screen special effects when it is detected that the preset target in the video image has a preset action.
  • the special effect generating module is specifically configured to generate a distorted screen special effect image according to the foreground sticker when it is detected that the preset target in the video image has a preset action.
  • the special effect generation module includes:
  • An offset determination unit configured to determine the offset of each pixel in the foreground sticker when a preset action of the preset target in the video image is detected
  • the special effect generating unit is configured to move each pixel in the foreground sticker according to the offset to obtain the distorted screen special effect image.
  • the offset determination unit is specifically configured to: when it is detected that the preset target in the video image performs a preset action, acquire at least one parameter among the height of the video image, the offset amplitude, the system time, the distortion coefficient, the distortion density, and the compensation coefficient; and determine the offset of each pixel in the foreground sticker according to the at least one parameter.
  • the offset determination unit is specifically configured to: determine the longitudinal offset of each pixel in the foreground sticker using a formula (reproduced as an image in the original publication), where x is the abscissa of the pixel, y is the ordinate of the pixel, α is the offset amplitude, β is the distortion coefficient, t is the system time, δ is the distortion density, ε1 and ε2 are the compensation coefficients, and sin(·) is the sine function.
  • the device further includes:
  • the filter module is used to perform filter processing on the screen special effect image
  • the image rendering module is used to render the screen special effect image processed by the filter.
  • the foreground sticker is composed of a first video sequence
  • the sticker superimposing module is specifically configured to: select a first frame of video image from the first video sequence as the foreground sticker;
  • the special effect generation module is specifically configured to: periodically select video images as screen special effect images from the remaining videos in the first video sequence according to the superimposition progress of the video images.
  • the filter is composed of a second video sequence
  • the filter unit is specifically configured to: periodically select, according to the superimposition progress of the video images, video images in sequence from the second video sequence as filter images; and use the filter images to perform filter processing on the screen special effect image.
  • the device further includes:
  • the first timer module is used to control the cycle through the first timer.
  • the device further includes:
  • the second timer module is configured to start a second timer for timing after detecting that a preset target appears in the video image and before superimposing the foreground sticker on the target image corresponding to the preset target.
  • the second timer module is further configured to turn off the filter, the screen special effect, and the foreground sticker in sequence when the second timer counts more than a preset time.
  • the spatial coordinates of the foreground sticker, the screen special effect, and the filter facing the screen are sequentially reduced.
  • an electronic device including:
  • Memory for storing non-transitory computer readable instructions
  • the processor is configured to run the computer-readable instructions so that the processor implements the above-mentioned image processing method when executed.
  • a computer-readable storage medium for storing non-transitory computer-readable instructions.
  • when the non-transitory computer-readable instructions are executed by a computer, the computer is caused to execute the above-mentioned image processing method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)
  • Studio Circuits (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present disclosure discloses an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a video image; when it is detected that a preset target appears in the video image, superimposing a foreground sticker on a target image corresponding to the preset target; and when it is detected that the preset target in the video image performs a preset action, generating a screen special effect. In the embodiments of the present disclosure, by acquiring a video image, superimposing a foreground sticker on the target image corresponding to a preset target when the preset target is detected in the video image, and generating a screen special effect when it is detected that the preset target in the video image performs a preset action, a screen special effect for the preset action performed by the preset target can be realized.

Description

图像处理方法、装置、电子设备和计算机可读存储介质
相关申请的交叉引用
本申请要求于2019年08月16日提交的,申请号为201910759082.0、发明名称为“图像处理方法、装置、电子设备和计算机可读存储介质”的中国专利申请的优先权,该申请的全文通过引用结合在本申请中。
技术领域
本公开涉及图像处理技术领域,特别是涉及一种图像处理方法、装置、电子设备和计算机可读存储介质。
背景技术
随着智能终端技术的发展,智能终端的功能也越来越多样化,例如,用户可以使用终端进行直播或者短视频拍摄。而在直播或者短视频拍摄中,预设动作特效实现是一个非常有趣的交互娱乐。
目前还没有一种方法能够实现预设动作特效。
发明内容
提供该发明内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该发明内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
本公开解决的技术问题是提供一种图像处理方法,以至少部分地解决现有技术中无法实现预设动作特效的技术问题。此外,还提供一种图像处理装置、 图像处理硬件装置、计算机可读存储介质和图像处理终端。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种图像处理方法,包括:
获取视频图像;
在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;
在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种图像处理装置,包括:
图像获取模块,用于获取视频图像;
贴纸叠加模块,用于在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;
特效生成模块,用于在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种电子设备,包括:
存储器,用于存储非暂时性计算机可读指令;以及
处理器,用于运行所述计算机可读指令,使得所述处理器执行时实现上述任一项所述的图像处理方法。
为了实现上述目的,根据本公开的一个方面,提供以下技术方案:
一种计算机可读存储介质,用于存储非暂时性计算机可读指令,当所述非暂时性计算机可读指令由计算机执行时,使得所述计算机执行上述任一项所述的图像处理方法。
为了实现上述目的,根据本公开的又一个方面,还提供以下技术方案:
一种图像处理终端,包括上述任一图像处理装置。
本公开实施例通过获取视频图像;在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效,可以实现预设目标发生的预设动作的屏幕特效。
上述说明仅是本公开技术方案的概述,为了能更清楚了解本公开的技术手段,而可依照说明书的内容予以实施,并且为让本公开的上述和其他目的、特征和优点能够更明显易懂,以下特举较佳实施例,并配合附图,详细说明如下。
附图说明
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。
图1为根据本公开一个实施例的图像处理方法的流程示意图;
图2为根据本公开一个实施例的图像处理方法的流程示意图;
图3为根据本公开一个实施例的图像处理装置的流程示意图;
图4为根据本公开一个实施例的电子设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序 执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
实施例一
为了解决现有技术中无法实现预设动作特效的技术问题,本公开实施例提供一种图像处理方法。如图1所示,该图像处理方法主要包括如下步骤S11至步骤S13。
步骤S11:获取视频图像。
其中,视频图像可以为实时输入的视频流,例如,短视频应用中的直播视频,也可以为预先存储在终端中的视频图像。其中,终端可以为移动终端,例如智能手机、平板电脑,也可以为固定终端,例如台式电脑。
步骤S12:在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸。
其中,预设目标可以为预设手势,例如空气拳、剪刀手、ok手势等。
其中,前景贴纸为作为预设目标前景的贴纸。具体的,可以从互联网获取预设目标的前景贴纸。
具体的,可以采用现有的检测算法(例如,基于神经网络的目标检测算法、或基于区域的目标检测算法等)对视频图像进行检测,检测到包含预设目标的视频图像。
步骤S13:在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。
其中,预设动作为与所述预设目标相关联的动作。例如,当预设目标为空气拳时,对应的预设动作则为出拳动作。
其中,屏幕特效可以为屏幕抖动特效和/或屏幕特效图像。
其中,屏幕特效图像可以根据前景贴纸生成,也可以为与所述前景贴纸无关的其它图像。该屏幕特效图像与预设动作相互应。
本实施例通过获取视频图像;在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效,可以实现预设目标发生的预设动作的屏幕特效。
在一个可选的实施例中,步骤S13具体包括:
在检测到所述视频图像中的所述预设目标发生预设动作时,根据所述前景贴纸生成扭曲的屏幕特效图像。
其中,屏幕特效图像可以为纵向扭曲的前景贴纸、或横向扭曲的前景贴纸、或纵向和横向同时扭曲的前景贴纸。
在一个可选的实施例中,所述在检测到所述视频图像中的所述预设目标发生预设动作时,根据所述前景贴纸生成扭曲的屏幕特效图像,包括:
在检测到所述视频图像中的所述预设目标发生预设动作时,确定所述前景贴纸中每个像素点的偏移量;
根据所述偏移量移动所述前景贴纸中每个像素点,得到所述扭曲的屏幕特效图像。
其中,偏移量可以为纵向偏移量、或横向偏移量、或纵向偏移量和横向偏移量。
在一个可选的实施例中,所述在检测到所述视频图像中的所述预设目标发生预设动作时,确定所述前景贴纸中每个像素点的偏移量,包括:
在检测到所述视频图像中的所述预设目标发生预设动作时,获取所述视频图像的高度、偏移幅度、***时间、扭曲系数、扭曲密度和补偿系数中的至少一个参数;
根据所述至少一个参数确定所述前景贴纸中每个像素点的偏移量。
其中,扭曲系数用于控制扭曲的快慢。
其中,补偿系数用于防止闪屏现象。
在一个可选的实施例中,所述根据所述至少一个参数确定所述前景贴纸中每个像素点的偏移量,包括:
采用公式
Figure PCTCN2020106861-appb-000001
确定所述前景贴纸中每个像素点的纵向偏移量;
其中,x为像素点横坐标,例如,可以为归一化的横坐标,y为像素点纵坐标,例如,可以为归一化的纵坐标;α为所述偏移幅度,例如,
Figure PCTCN2020106861-appb-000002
其中,h为视频图像的高度;β为所述扭曲系数,用于控制扭曲的快慢,例如可以设置为30;t为所述***时间,单位为s;δ为所述扭曲密度,例如,可以设置为2.7;ε 1和ε 2为所述补偿系数,sin(.)为求正弦值。
如果仅使用关于横坐标和时间的三角函数来控制纵向偏移量,那么会导致上下边界的闪屏现象。因此提出了补偿项ε 1和ε 2,计算方法分别为:
Figure PCTCN2020106861-appb-000003
Figure PCTCN2020106861-appb-000004
在一个可选的实施例中,所述方法还包括:
步骤S14:对所述屏幕特效图像进行滤镜处理;
步骤S15:渲染滤镜处理后的屏幕特效图像。
在一个可选的实施例中,所述前景贴纸、所述屏幕特效、所述滤镜面向屏幕方向由后向前依次排列。
在一个可选的实施例中,所述前景贴纸由第一视频序列组成;
相应的,步骤S12具体包括:
从所述第一视频序列中选取第一帧视频图像作为所述前景贴纸;
相应的,步骤S13具体包括:
根据所述视频图像的叠加进度,周期性的从所述第一视频序列中的剩余视频中依次选取视频图像作为屏幕特效图像。
具体的,前景贴纸也是由一个视频序列组成。为了保证屏幕特效的同步,前景贴纸的叠加周期与滤镜的叠加周期一致。其周期具体可以根据视频图像的帧率确定。
在一个可选的实施例中,所述滤镜由第二视频序列组成;
相应的,步骤S14具体包括:
步骤S141:根据所述视频图像的叠加进度,周期性的从所述第二视频序列中依次选取视频图像作为滤镜图像;
具体的,滤镜可以采用正片叠底模式,由一个视频序列组成,每隔50毫秒采用一个新的视频作为滤镜,循环叠加。
步骤S142:采用所述滤镜图像对所述屏幕特效图像进行滤镜处理。
在一个可选的实施例中,所述方法还包括:
通过第一计时器控制所述周期。
具体的,可以通过设置计时器来实现,该计时器每隔预设时间(例如50毫秒)即激活一次
在一个可选的实施例中,所述在检测到所述视频图像中出现预设目标之后,在所述预设目标对应的目标图像上叠加前景贴纸叠加之前,还包括:
开启第二计时器进行计时。
在一个可选的实施例中,所述方法还包括:
在所述第二计时器计时超过预设时间时,依次关闭所述滤镜、所述屏幕特效、及叠加所述前景贴纸。
在一个可选的实施例中,所述前景贴纸、所述屏幕特效、所述滤镜面向屏幕方向由后向前依次排列。
其中,面向屏幕方向可以通过三维坐标系的某一方向上的坐标来表示,例如z轴方向,所述前景贴纸、所述屏幕特效、所述滤镜的坐标沿z轴依次降低。
实施例二
本实施例为一具体实现,用于解释说明本公开,如图2所示,该图像处理方法主要包括如下步骤S21至步骤S25。
步骤S21:获取视频图像。
步骤S22:在检测到所述视频图像中出现预设目标时,开启计时器进行计时。
步骤S23:叠加前景贴纸。
步骤S24:在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效图像。
步骤S25:对所述屏幕特效图像进行滤镜处理。
步骤S26:渲染滤镜处理后的屏幕特效图像。
其中,在检测到预设目标后,设置一个计时器,时长可以为2秒。在计时器生效的时间段内,触发3个特效:叠加前景贴纸、生成屏幕特效以及滤镜特效。且三者的z轴高度依次降低。
其中,滤镜采用正片叠底模式,由一个视频序列组成,每隔50毫秒采用一个新视频作为滤镜,循环叠加,通过设置另一个计时器来实现的,该计时器每隔50毫秒即激活一次。
前景贴纸也是由一个视频序列组成。为了保证特效的同步,采用同样的计 时器,即每隔50毫秒触发一次。
步骤S27:在所述第一计时器计时超过预设时间时,依次关闭所述滤镜、所述屏幕特效、及所述前景贴纸。
其中,预设时间可以为2s。
本领域技术人员应能理解,在上述各个实施例的基础上,还可以进行明显变型(例如,对所列举的模式进行组合)或等同替换。
在上文中,虽然按照上述的顺序描述了图像处理方法实施例中的各个步骤,本领域技术人员应清楚,本公开实施例中的步骤并不必然按照上述顺序执行,其也可以倒序、并行、交叉等其他顺序执行,而且,在上述步骤的基础上,本领域技术人员也可以再加入其他步骤,这些明显变型或等同替换的方式也应包含在本公开的保护范围之内,在此不再赘述。
下面为本公开装置实施例,本公开装置实施例可用于执行本公开方法实施例实现的步骤,为了便于说明,仅示出了与本公开实施例相关的部分,具体技术细节未揭示的,请参照本公开方法实施例。
实施例三
为了解决现有技术中无法实现预设动作的特效的技术问题,本公开实施例提供一种图像处理装置。该装置可以执行上述实施例一所述的图像处理方法实施例中的步骤。如图3所示,该装置主要包括:图像获取模块31、贴纸叠加模块32、特效生成模块33和图像渲染模块34;其中,
图像获取模块31用于获取视频图像;
贴纸叠加模块32用于在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;
特效生成模块33用于在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。
进一步的,所述特效生成模块33具体用于:在检测到所述视频图像中的所述预设目标发生预设动作时,根据所述前景贴纸生成扭曲的屏幕特效图像。
进一步的,所述特效生成模块33包括:偏移量确定单元331和特效生成单元332;其中,
偏移量确定单元331用于在检测到所述视频图像中的所述预设目标发生预设动作时,确定所述前景贴纸中每个像素点的偏移量;
特效生成单元332用于根据所述偏移量移动所述前景贴纸中每个像素点,得到所述扭曲的屏幕特效图像。
进一步的,所述偏移量确定单元331具体用于:在检测到所述视频图像中的所述预设目标发生预设动作时,获取所述视频图像的高度、偏移幅度、***时间、扭曲系数、扭曲密度和补偿系数中的至少一个参数;根据所述至少一个参数确定所述前景贴纸中每个像素点的偏移量。
进一步的,所述偏移量确定单元331具体用于:采用公式
Figure PCTCN2020106861-appb-000005
Figure PCTCN2020106861-appb-000006
确定所述前景贴纸中每个像素点的纵向偏移量;其中,x为像素点横坐标,y为像素点纵坐标,α为所述偏移幅度,β为所述扭曲系数,t为所述***时间,δ为所述扭曲密度,ε 1和ε 2为所述补偿系数,sin(.)为求正弦值。
进一步的,所述装置还包括:滤镜模块34和图像渲染模块35;其中,
滤镜模块34用于对所述屏幕特效图像进行滤镜处理;
图像渲染模块35用于渲染滤镜处理后的屏幕特效图像。
进一步的,所述前景贴纸由第一视频序列组成;
相应的,所述贴纸叠加模块32具体用于:从所述第一视频序列中选取第一帧视频图像作为所述前景贴纸;
相应的,所述所述特效生成模块33具体用于:根据所述视频图像的叠加进 度,周期性的从所述第一视频序列中的剩余视频中依次选取视频图像作为屏幕特效图像。
进一步的,所述滤镜由第二视频序列组成;
相应的,所述滤镜模块34具体用于:根据所述视频图像的叠加进度,周期性的从所述第二视频序列中依次选取视频图像作为滤镜图像;采用所述滤镜图像对所述屏幕特效图像进行滤镜处理。
进一步的,所述装置还包括:第一计时器模块36;其中,
第一计时器模块36用于通过第一计时器控制所述周期。
进一步的,所述装置还包括:第二计时器模块37;
第二计时器模块37用于在检测到所述视频图像中出现预设目标之后,在所述预设目标对应的目标图像上叠加前景贴纸叠加之前,开启第二计时器进行计时。
进一步的,所述第二计时器模块37还用于:在所述第二计时器计时超过预设时间时,依次关闭所述滤镜、所述屏幕特效、及所述前景贴纸。
进一步的,所述前景贴纸、所述屏幕特效、所述滤镜面向屏幕方向由后向前依次排列。
有关图像处理装置实施例的工作原理、实现的技术效果等详细说明可以参考前述库存处理方法实施例中的相关说明,在此不再赘述。
实施例四
下面参考图4,其示出了适于用来实现本公开实施例的电子设备400的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体叠加器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图4示出的电子设备仅仅是一个示例,不应 对本公开实施例的功能和使用范围带来任何限制。
如图4所示,电子设备400可以包括处理装置(例如中央处理器、图形处理器等)401,其可以根据存储在只读存储器(ROM)402中的程序或者从存储装置406加载到随机访问存储器(RAM)403中的程序而执行各种适当的动作和处理。在RAM 403中,还存储有电子设备400操作所需的各种程序和数据。处理装置401、ROM 402以及RAM 403通过总线404彼此相连。输入/输出(I/O)接口405也连接至总线404。
通常,以下装置可以连接至I/O接口405:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置406;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置407;包括例如磁带、硬盘等的存储装置406;以及通信装置409。通信装置409可以允许电子设备400与其他设备进行无线或有线通信以交换数据。虽然图4示出了具有各种装置的电子设备400,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置409从网络上被下载和安装,或者从存储装置406被安装,或者从ROM 402被安装。在该计算机程序被处理装置401执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的***、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机 访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:获取视频图像;在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言— 诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的***、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上***(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行***、装置或设备使用或与指令执行***、装置或设备结合地 使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体***、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,提供了一种图像处理方法,包括:
获取视频图像;
在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;
在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。获取视频图像;
进一步的,所述在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效,包括:
在检测到所述视频图像中的所述预设目标发生预设动作时,根据所述前景贴纸生成扭曲的屏幕特效图像。
进一步的,所述在检测到所述视频图像中的所述预设目标发生预设动作时,根据所述前景贴纸生成扭曲的屏幕特效图像,包括:
在检测到所述视频图像中的所述预设目标发生预设动作时,确定所述前景贴纸中每个像素点的偏移量;
根据所述偏移量移动所述前景贴纸中每个像素点,得到所述扭曲的屏幕特效图像。
进一步的,所述在检测到所述视频图像中的所述预设目标发生预设动作时, 确定所述前景贴纸中每个像素点的偏移量,包括:
在检测到所述视频图像中的所述预设目标发生预设动作时,获取所述视频图像的高度、偏移幅度、***时间、扭曲系数、扭曲密度和补偿系数中的至少一个参数;
根据所述至少一个参数确定所述前景贴纸中每个像素点的偏移量。
进一步的,所述根据所述至少一个参数确定所述前景贴纸中每个像素点的偏移量,包括:
采用公式
Figure PCTCN2020106861-appb-000007
确定所述前景贴纸中每个像素点的纵向偏移量;
其中,x为像素点横坐标,y为像素点纵坐标,α为所述偏移幅度,β为所述扭曲系数,t为所述***时间,δ为所述扭曲密度,ε 1和ε 2为所述补偿系数,sin(.)为求正弦值。
进一步的,所述前景贴纸由第一视频序列组成;
相应的,所述在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸,包括:
从所述第一视频序列中选取第一帧视频图像作为所述前景贴纸;
相应的,所述在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效叠加,包括:
根据所述视频图像的叠加进度,周期性的从所述第一视频序列中的剩余视频中依次选取视频图像作为屏幕特效图像。
进一步的,所述方法还包括:
对所述屏幕特效图像进行滤镜处理;
渲染滤镜处理后的屏幕特效图像。
进一步的,所述滤镜由第二视频序列组成;
相应的,所述对所述屏幕特效图像进行滤镜处理,包括:
根据所述视频图像的叠加进度,周期性的从所述第二视频序列中依次选取视频图像作为滤镜图像;
采用所述滤镜图像对所述屏幕特效图像进行滤镜处理。
进一步的,所述方法还包括:
通过第一计时器控制所述周期。
进一步的,所述在检测到所述视频图像中出现预设目标之后,在所述预设目标对应的目标图像上叠加前景贴纸叠加之前,所述方法还包括:
开启第二计时器进行计时。
进一步的,所述方法还包括:
在所述第二计时器计时超过预设时间时,依次关闭所述滤镜、所述屏幕特效、及叠加所述前景贴纸。
进一步的,所述前景贴纸、所述屏幕特效、所述滤镜面向屏幕方向上的空间坐标依次降低。
根据本公开的一个或多个实施例,提供了一种图像处理装置,包括:
图像获取模块,用于获取视频图像;
贴纸叠加模块,用于在检测到所述视频图像中出现预设目标时,在所述预设目标对应的目标图像上叠加前景贴纸;
特效生成模块,用于在检测到所述视频图像中的所述预设目标发生预设动作时,生成屏幕特效。
进一步的,所述特效生成模块具体用于:在检测到所述视频图像中的所述预设目标发生预设动作时,根据所述前景贴纸生成扭曲的屏幕特效图像。
进一步的,所述特效生成模块包括:
偏移量确定单元,用于在检测到所述视频图像中的所述预设目标发生预设动作时,确定所述前景贴纸中每个像素点的偏移量;
特效生成单元,用于根据所述偏移量移动所述前景贴纸中每个像素点,得到所述扭曲的屏幕特效图像。
进一步的,所述偏移量确定单元具体用于:在检测到所述视频图像中的所述预设目标发生预设动作时,获取所述视频图像的高度、偏移幅度、***时间、扭曲系数、扭曲密度和补偿系数中的至少一个参数;根据所述至少一个参数确定所述前景贴纸中每个像素点的偏移量。
进一步的,所述偏移量确定单元具体用于:采用公式
Figure PCTCN2020106861-appb-000008
确定所述前景贴纸中每个像素点的纵向偏移量;其中,x为像素点横坐标,y为像素点纵坐标,α为所述偏移幅度,β为所述扭曲系数,t为所述***时间,δ为所述扭曲密度,ε 1和ε 2为所述补偿系数,sin(.)为求正弦值。
进一步的,所述装置还包括:
滤镜模块,用于对所述屏幕特效图像进行滤镜处理;
图像渲染模块,用于渲染滤镜处理后的屏幕特效图像。
进一步的,所述前景贴纸由第一视频序列组成;
相应的,所述贴纸叠加模块具体用于:从所述第一视频序列中选取第一帧视频图像作为所述前景贴纸;
相应的,所述特效生成模块具体用于:根据所述视频图像的叠加进度,周期性的从所述第一视频序列中的剩余视频中依次选取视频图像作为屏幕特效图像。
进一步的,所述滤镜由第二视频序列组成;
相应的,所述滤镜单元具体用于:根据所述视频图像的叠加进度,周期性的从所述第二视频序列中依次选取视频图像作为滤镜图像;采用所述滤镜图像对所述屏幕特效图像进行滤镜处理。
进一步的,所述装置还包括:
第一计时器模块,用于通过第一计时器控制所述周期。
进一步的,所述装置还包括:
第二计时器模块,用于在检测到所述视频图像中出现预设目标之后,在所述预设目标对应的目标图像上叠加前景贴纸叠加之前,开启第二计时器进行计时。
进一步的,所述第二计时器模块还用于:在所述第二计时器计时超过预设时间时,依次关闭所述滤镜、所述屏幕特效、及所述前景贴纸。
进一步的,所述前景贴纸、所述屏幕特效、所述滤镜面向屏幕方向上的空间坐标依次降低。
根据本公开的一个或多个实施例,提供了一种电子设备,包括:
存储器,用于存储非暂时性计算机可读指令;以及
处理器,用于运行所述计算机可读指令,使得所述处理器执行时实现上述的图像处理方法。
根据本公开的一个或多个实施例,提供了一种计算机可读存储介质,用于存储非暂时性计算机可读指令,当所述非暂时性计算机可读指令由计算机执行时,使得所述计算机执行上述的图像处理方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征 与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (15)

  1. An image processing method, comprising:
    acquiring a video image;
    when it is detected that a preset target appears in the video image, superimposing a foreground sticker on a target image corresponding to the preset target; and
    when it is detected that the preset target in the video image performs a preset action, generating a screen special effect.
  2. The method according to claim 1, wherein generating a screen special effect when it is detected that the preset target in the video image performs a preset action comprises:
    when it is detected that the preset target in the video image performs a preset action, generating a distorted screen special effect image according to the foreground sticker.
  3. The method according to claim 2, wherein generating a distorted screen special effect image according to the foreground sticker when it is detected that the preset target in the video image performs a preset action comprises:
    when it is detected that the preset target in the video image performs a preset action, determining an offset of each pixel in the foreground sticker; and
    moving each pixel in the foreground sticker according to the offset to obtain the distorted screen special effect image.
  4. The method according to claim 3, wherein determining the offset of each pixel in the foreground sticker when it is detected that the preset target in the video image performs a preset action comprises:
    when it is detected that the preset target in the video image performs a preset action, acquiring at least one parameter among a height of the video image, an offset amplitude, a system time, a distortion coefficient, a distortion density, and a compensation coefficient; and
    determining the offset of each pixel in the foreground sticker according to the at least one parameter.
  5. The method according to claim 4, wherein determining the offset of each pixel in the foreground sticker according to the at least one parameter comprises:
    determining a longitudinal offset of each pixel in the foreground sticker using the formula shown in Figure PCTCN2020106861-appb-100001 (the formula is reproduced as an image in the original publication);
    wherein x is the abscissa of the pixel, y is the ordinate of the pixel, α is the offset amplitude, β is the distortion coefficient, t is the system time, δ is the distortion density, ε1 and ε2 are the compensation coefficients, and sin(·) is the sine function.
  6. The method according to claim 1, wherein the foreground sticker is composed of a first video sequence;
    correspondingly, superimposing a foreground sticker on the target image corresponding to the preset target when it is detected that a preset target appears in the video image comprises:
    selecting a first frame of video image from the first video sequence as the foreground sticker; and
    correspondingly, generating a screen special effect superimposition when it is detected that the preset target in the video image performs a preset action comprises:
    periodically selecting, according to a superimposition progress of the video images, video images in sequence from the remaining frames of the first video sequence as screen special effect images.
  7. [Corrected under Rule 26, 11.08.2020]
    The method according to any one of claims 2 to 6, further comprising:
    performing filter processing on the screen special effect image; and
    rendering the screen special effect image after the filter processing.
  8. [Corrected under Rule 26, 11.08.2020]
    The method according to claim 7, wherein the filter is composed of a second video sequence;
    correspondingly, performing filter processing on the screen special effect image comprises:
    periodically selecting, according to the superimposition progress of the video images, video images in sequence from the second video sequence as filter images; and
    performing filter processing on the screen special effect image using the filter images.
  9. The method according to any one of claims 6 to 8, further comprising:
    controlling the period by a first timer.
  10. The method according to claim 7, wherein after detecting that a preset target appears in the video image and before superimposing the foreground sticker on the target image corresponding to the preset target, the method further comprises:
    starting a second timer for timing.
  11. The method according to claim 10, further comprising:
    when the second timer counts beyond a preset time, turning off, in sequence, the filter, the screen special effect, and the superimposed foreground sticker.
  12. The method according to claim 6, wherein the foreground sticker, the screen special effect, and the filter are arranged from back to front along the direction facing the screen.
  13. A preset-action image processing apparatus, comprising:
    an image acquisition module, configured to acquire a video image;
    a sticker superimposition module, configured to superimpose a foreground sticker on a target image corresponding to a preset target when it is detected that the preset target appears in the video image; and
    a special effect generation module, configured to generate a screen special effect when it is detected that the preset target in the video image performs a preset action.
  14. An electronic device, comprising:
    a memory, configured to store non-transitory computer-readable instructions; and
    a processor, configured to execute the computer-readable instructions such that, when executing the instructions, the processor implements the image processing method according to any one of claims 1 to 12.
  15. A computer-readable storage medium, configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the image processing method according to any one of claims 1 to 12.
PCT/CN2020/106861 2019-08-16 2020-08-04 图像处理方法、装置、电子设备和计算机可读存储介质 WO2021031847A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022509655A JP7338041B2 (ja) 2019-08-16 2020-08-04 画像処理方法、装置、電子機器及びコンピュータ可読型記憶媒体
EP20854936.0A EP4016993A4 (en) 2019-08-16 2020-08-04 IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND COMPUTER READABLE STORAGE MEDIUM
US17/672,529 US11516411B2 (en) 2019-08-16 2022-02-15 Image processing method and apparatus, electronic device and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910759082.0 2019-08-16
CN201910759082.0A CN112396676B (zh) 2019-08-16 2019-08-16 图像处理方法、装置、电子设备和计算机可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/672,529 Continuation US11516411B2 (en) 2019-08-16 2022-02-15 Image processing method and apparatus, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021031847A1 true WO2021031847A1 (zh) 2021-02-25

Family

ID=74601972

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/106861 WO2021031847A1 (zh) 2019-08-16 2020-08-04 图像处理方法、装置、电子设备和计算机可读存储介质

Country Status (5)

Country Link
US (1) US11516411B2 (zh)
EP (1) EP4016993A4 (zh)
JP (1) JP7338041B2 (zh)
CN (1) CN112396676B (zh)
WO (1) WO2021031847A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278041B (zh) * 2021-04-29 2024-02-27 北京字跳网络技术有限公司 图像处理方法、装置、电子设备以及可读存储介质
CN113923355A (zh) * 2021-09-30 2022-01-11 上海商汤临港智能科技有限公司 一种车辆及图像拍摄方法、装置、设备、存储介质
CN116126182A (zh) * 2022-09-08 2023-05-16 北京字跳网络技术有限公司 特效处理方法、装置、电子设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074035A (zh) * 2010-12-29 2011-05-25 拓维信息***股份有限公司 一种基于全景图像扭曲的手机动漫人物创作方法
CN105068748A (zh) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 触屏智能设备的摄像头实时画面中用户界面交互方法
CN106210545A (zh) * 2016-08-22 2016-12-07 北京金山安全软件有限公司 一种视频拍摄方法、装置及电子设备
CN108289180A (zh) * 2018-01-30 2018-07-17 广州市百果园信息技术有限公司 根据肢体动作处理视频的方法、介质和终端装置
CN108712661A (zh) * 2018-05-28 2018-10-26 广州虎牙信息科技有限公司 一种直播视频处理方法、装置、设备及存储介质
CN108833818A (zh) * 2018-06-28 2018-11-16 腾讯科技(深圳)有限公司 视频录制方法、装置、终端及存储介质
CN108986017A (zh) * 2018-06-29 2018-12-11 北京微播视界科技有限公司 图像特效处理方法、装置和计算机可读存储介质
US20190070500A1 (en) * 2017-09-07 2019-03-07 Line Corporation Method and system for providing game based on video call and object recognition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104967801B (zh) * 2015-02-04 2019-09-17 腾讯科技(深圳)有限公司 一种视频数据处理方法和装置
CN109462776B (zh) * 2018-11-29 2021-08-20 北京字节跳动网络技术有限公司 一种视频特效添加方法、装置、终端设备及存储介质
CN109803165A (zh) * 2019-02-01 2019-05-24 北京达佳互联信息技术有限公司 视频处理的方法、装置、终端及存储介质
CN109889893A (zh) * 2019-04-16 2019-06-14 北京字节跳动网络技术有限公司 视频处理方法、装置及设备

Also Published As

Publication number Publication date
CN112396676B (zh) 2024-04-02
JP2022545394A (ja) 2022-10-27
JP7338041B2 (ja) 2023-09-04
EP4016993A4 (en) 2022-08-31
US11516411B2 (en) 2022-11-29
US20220174226A1 (en) 2022-06-02
CN112396676A (zh) 2021-02-23
EP4016993A1 (en) 2022-06-22

Similar Documents

Publication Publication Date Title
WO2021031847A1 (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
JP6894976B2 (ja) 画像円滑性向上方法および装置
US20150193187A1 (en) Method and apparatus for screen sharing
WO2021135626A1 (zh) 菜单项选择方法、装置、可读介质及电子设备
WO2021027631A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
WO2020244553A1 (zh) 字幕越界的处理方法、装置和电子设备
WO2021135864A1 (zh) 图像处理方法及装置
WO2022022689A1 (zh) 交互方法、装置和电子设备
JP2023515607A (ja) 画像特殊効果の処理方法及び装置
WO2023169305A1 (zh) 特效视频生成方法、装置、电子设备及存储介质
CN111310632B (zh) 终端的控制方法、装置、终端和存储介质
WO2021027547A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
CN109302563B (zh) 防抖处理方法、装置、存储介质及移动终端
WO2021227953A1 (zh) 图像特效配置方法、图像识别方法、装置及电子设备
CN111833459B (zh) 一种图像处理方法、装置、电子设备及存储介质
CN111352560A (zh) 分屏方法、装置、电子设备和计算机可读存储介质
US11962929B2 (en) Method, apparatus, and device for configuring video special effect, and storage medium
WO2023138441A1 (zh) 视频生成方法、装置、设备及存储介质
US11805219B2 (en) Image special effect processing method and apparatus, electronic device and computer-readable storage medium
WO2023098576A1 (zh) 图像处理方法、装置、设备及介质
CN116596748A (zh) 图像风格化处理方法、装置、设备、存储介质和程序产品
WO2021027632A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
WO2021073204A1 (zh) 对象的显示方法、装置、电子设备及计算机可读存储介质
EP4113446A1 (en) Sticker processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20854936

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022509655

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020854936

Country of ref document: EP

Effective date: 20220316