CN114286002A - Image processing circuit, method and device, electronic equipment and chip


Info

Publication number: CN114286002A
Application number: CN202111627097.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 秦兴兴
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Legal status: Pending
Prior art keywords: image, video, images, area, image processing
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111627097.5A
Publication of CN114286002A

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing circuit, an image processing method, an image processing apparatus, an electronic device, and a chip, and belongs to the field of electronic technology. The circuit includes a main control chip and an image processing chip, the main control chip being connected to the image processing chip; the main control chip is used for acquiring a first video and a first image; the image processing chip is used for fusing the image of a first area in the first image with at least two frames of video images of the first video to obtain a second video.

Description

Image processing circuit, method and device, electronic equipment and chip
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing circuit, an image processing method, an image processing apparatus, an electronic device, and a chip.
Background
Double exposure refers to superimposing the content recorded in two or more exposures within a single picture, so as to enhance the illusory, layered effect of the image.
In the related art, multiple captured pictures can be superimposed by image processing software to achieve a double exposure effect.
However, current image processing software can only output double-exposure pictures, so its application scenarios are rather limited and it cannot meet users' creative needs.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing circuit, an image processing method, an image processing apparatus, an electronic device, and a chip, which can enrich the application scenarios of the double exposure technique, increase the diversity of double exposure effects, and make the way images are displayed more novel and engaging, so as to meet users' creative needs.
In a first aspect, an embodiment of the present application provides an image processing circuit, including a main control chip and an image processing chip, wherein the main control chip is connected with the image processing chip; the main control chip is used for acquiring a first video and a first image; and the image processing chip is used for fusing the image of the first area in the first image with at least two frames of video images of the first video to obtain a second video.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including the image processing circuit according to the first aspect.
In a third aspect, an embodiment of the present application provides an image processing method, including: acquiring, by a main control chip, a first video and a first image; and performing, by an image processing chip, fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
In a fourth aspect, the present application provides an electronic device, which includes the image processing circuit according to the first aspect, a processor, and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the third aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps performed by the image processing chip in the method according to the third aspect, or to implement the steps performed by the main control chip in the method according to the third aspect.
In the embodiments of the present application, the main control chip can acquire a first video and a first image, and the image processing chip can perform fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video. With this scheme, because the image of the first area can be fused into the video frames of the first video, a second video with a double exposure effect can be obtained; in other words, the double exposure technique is applied to video scenes. This not only enriches the application scenarios of the double exposure technique and increases the diversity of double exposure effects, but also enriches the ways in which images are displayed and makes image display more engaging, thereby meeting users' creative needs.
Drawings
Fig. 1 is a schematic structural diagram of a chip in an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an image processing method provided in an embodiment of the present application;
fig. 3(a) is one of schematic diagrams illustrating an image processing effect of an image processing method according to an embodiment of the present application;
fig. 3(b) is a second schematic diagram illustrating an image processing effect of the image processing method according to the embodiment of the present application;
fig. 3(c) is a third schematic diagram illustrating an image processing effect of the image processing method according to the embodiment of the present application;
fig. 4(a) is a fourth schematic diagram illustrating an image processing effect of the image processing method according to the embodiment of the present application;
fig. 4(b) is a fifth schematic diagram illustrating an image processing effect of the image processing method according to the embodiment of the present application;
FIG. 5 is a hardware diagram of an electronic device provided in an embodiment of the present application;
fig. 6 is a second hardware schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first," "second," and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, the terms "first," "second," and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one object. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The following describes in detail an image processing circuit, a method, an apparatus, and an electronic device provided in the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides an image processing circuit, including: a main control chip 110 and an image processing chip 120. The main control chip 110 is connected to the image processing chip 120. The main control chip 110 may be used to obtain a first video and a first image. The image processing chip 120 may be configured to perform fusion processing on the image of the first region in the first image and at least two frames of video images of the first video to obtain a second video.
Based on the above scheme, because the image of the first area can be fused into the video frames of the first video, a second video with a double exposure effect can be obtained; in other words, the double exposure technique is applied to video scenes. This not only enriches the application scenarios of the double exposure technique and increases the diversity of double exposure effects, but also enriches the ways in which images are displayed and makes image display more engaging, thereby meeting users' creative needs.
Optionally, with continued reference to fig. 1, the main control chip 110 may include: a first output interface 111, a second output interface 112, and an image separation unit 113; the above-described image processing chip 120 may include a first input interface 121, a second input interface 122, and an image synthesizing unit 123. The image separation unit 113 is connected to the first output interface 111 and the second output interface 112, respectively, the first output interface 111 is connected to the first input interface 121, the second output interface 112 is connected to the second input interface 122, and the first input interface 121 and the second input interface 122 are connected to the image synthesis unit 123, respectively.
The image separation unit 113 may be configured to determine a first region from the first image, and extract at least two sub-images from at least two frames of video images of the first video, where the sub-images are images of a second region in each frame of video image, and the second region is a region corresponding to the first region; the first output interface 111 may be used to output a first image, and the first input interface 121 may be used to receive a first image; the second output interface 112 may be configured to output at least two sub-images, and the second input interface 122 may be configured to receive at least two sub-images; the image synthesizing unit 123 may be configured to perform image fusion processing on the first image and each of the at least two sub-images to obtain the second video.
Optionally, the main control chip 110 may further include a preprocessing unit connected to the image separation unit. After the main control chip 110 obtains the first video and the first image, and before the image separation processing, the preprocessing unit may first preprocess the first video and the first image; the preprocessing may include basic effect processing such as noise reduction.
Illustratively, the image separation unit may transmit the first image and the at least two sub-images to the image processing chip through the MIPI DSI protocol. The first output interface may be MIPI DSI0, the second output interface may be MIPI DSI1, the first input interface may be MIPI DSI RX0, the second input interface may be MIPI DSI RX1, and the image separation unit may be the SurfaceFlinger module of the Android framework.
Based on the scheme, the second video for displaying the sub-images in the first area can be obtained, namely the second video comprises the double exposure effect of the first image and the first video, and the double exposure technology can be applied to the video field, so that the application scenes of the double exposure technology are enriched, the display mode of the images is enriched, and the interestingness of image display is improved.
Optionally, with continued reference to fig. 1, the main control chip 110 may further include a third input interface 114, and the image processing chip 120 may further include an image frame insertion unit 124 and a third output interface 125. The second input interface 122 is connected to the image interpolation unit 124, the image interpolation unit 124 is respectively connected to the image synthesis unit 123 and the third output interface 125, and the third output interface 125 is connected to the third input interface 114.
The second output interface 112 may be configured to transmit a third video to the image frame interpolation unit 124 through the second input interface 122; the image frame interpolation unit 124 may be configured to perform frame interpolation processing on the third video according to a preset frame rate to generate the first video; and the third output interface 125 may be configured to transmit the first video to the main control chip 110 through the third input interface 114. The third video is a video recorded by a first camera, or the third video is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
Illustratively, the third input interface may be MIPI CSI RX1, and the third output interface may be MIPI CSI TX1.
Based on this scheme, frame interpolation can be performed on the third video to generate a first video with a higher frame rate; this improves the display effect of the first video and prepares for generating a second video of higher display quality.
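For illustration only: the patent does not specify which interpolation algorithm the image frame interpolation unit uses, so the sketch below is a minimal, non-authoritative example assuming plain linear blending of consecutive frames stored as NumPy uint8 arrays. The function name interpolate_frames and the factor parameter are hypothetical.

```python
import numpy as np

def interpolate_frames(frames, factor=4):
    """Raise the frame rate of `frames` by `factor` (e.g. 30 fps -> 120 fps)
    by linearly blending each pair of consecutive frames."""
    output = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        prev_f = prev.astype(np.float32)
        nxt_f = nxt.astype(np.float32)
        for i in range(factor):
            t = i / factor  # 0, 1/factor, ..., (factor-1)/factor
            output.append(((1.0 - t) * prev_f + t * nxt_f).astype(np.uint8))
    output.append(frames[-1])  # keep the final frame
    return output

# Example: a 30-frame clip captured at 30 fps becomes a clip suitable for 120 fps playback.
third_video = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(30)]
first_video = interpolate_frames(third_video, factor=4)
print(len(third_video), "->", len(first_video))  # 30 -> 117
```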
Optionally, the image synthesizing unit 123 may be further configured to adjust the image transparency of the image in the first region from a first transparency to a second transparency before performing image fusion processing on the first image and each of the at least two sub-images to obtain the second video, where the first transparency is smaller than the second transparency.
Based on the scheme, the image transparency of the image in the first area can be adjusted from the first transparency to the second transparency, and the sub-image displayed in the first area can be clearer because the first transparency is smaller than the second transparency, so that the video display effect of the second video can be improved.
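As a rough, non-authoritative illustration of this transparency adjustment, the sketch below treats raising the transparency of the first-region pixels as lowering their weight in an alpha blend with the sub-image. The helper name fuse_with_transparency, the (top, left, height, width) region convention, and the alpha value are assumptions rather than the patent's actual implementation.

```python
import numpy as np

def fuse_with_transparency(first_image, sub_image, region, alpha=0.35):
    """Blend `sub_image` into `region` of `first_image`.

    `alpha` is the remaining opacity of the original first-region pixels after
    their transparency is raised; a smaller alpha lets the sub-image show
    through more clearly.  region = (top, left, height, width).
    """
    top, left, h, w = region
    fused = first_image.astype(np.float32)
    patch = fused[top:top + h, left:left + w]
    fused[top:top + h, left:left + w] = alpha * patch + (1.0 - alpha) * sub_image.astype(np.float32)
    return fused.astype(np.uint8)
```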
Alternatively, the image synthesizing unit 123 may be specifically configured to replace the pixels in the first region of the first image with the pixels in each sub-image, respectively.
Based on the above-described scheme, since the pixels in the first region of the first image can be replaced with the pixels in each sub-image, the sub-image can be displayed in the first region of the first image, so that the second video having the double exposure effect can be generated.
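A minimal sketch of the pixel-replacement variant, assuming each frame is a NumPy array and the first region is an axis-aligned rectangle; the helper name replace_region_pixels is hypothetical.

```python
import numpy as np

def replace_region_pixels(first_image, sub_images, region):
    """Produce one fused frame per sub-image by overwriting the pixels of
    `region` in a copy of `first_image` with that sub-image's pixels.

    region = (top, left, height, width); each sub-image has shape (height, width, 3).
    The returned list of frames makes up the second video.
    """
    top, left, h, w = region
    second_video = []
    for sub in sub_images:
        frame = first_image.copy()
        frame[top:top + h, left:left + w] = sub
        second_video.append(frame)
    return second_video
```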
Optionally, the image separation unit 113 may be configured to extract an image of the first region from the first image, so as to obtain an object image; the first output interface 111 may be used for outputting the object image, and the first input interface 121 may be used for receiving the object image; the second output interface 112 may be configured to output the first video, and the second input interface 122 is configured to receive the first video; the image synthesizing unit 123 may be configured to replace pixels of a third region of each of the at least two frames of video images with pixels in the object image; the third area is an image area determined by user input or an image area determined by performing feature recognition on each frame of video image.
Based on this scheme, since the pixel information of the image of the first area can be added to at least two frames of video images of the first video, a second video marked with the pixels of the first-area image can be obtained. Because the second video contains the double exposure effect of the first image and the first video, the application scenarios of the double exposure technology are enriched, the display modes of images are enriched, and the interest of image display is improved.
Optionally, with continuing reference to fig. 1, the main control chip 110 may further include a fourth input interface 115, and the image processing chip 120 may further include a fourth output interface 126. The main control chip 110 may acquire the third video transmitted by the image sensor through the fourth input interface 115, and transmit the third video to the second output interface 112 through the image separation unit 113. The image processing chip 120 may transmit the second video to the display unit through the fourth output interface 126 for display.
Illustratively, the fourth input interface may be MIPI CSI RX0, and the fourth output interface may be MIPI CSI TX 0.
Optionally, the image synthesizing unit 123 may be further configured to adjust the object image from a first size to a second size before the replacement, where the first size is larger than the second size, and to replace the pixels of the third area of each of the at least two frames of video images with the pixels of the resized object image.
Based on this scheme, the object image can be reduced in size before being used to replace the pixels of the third area, which reduces the influence of the object image on the images of the other areas outside the third area in the at least two frames of video images and thus ensures the video display effect of the second video.
As shown in fig. 2, an embodiment of the present application provides an image processing method, which is applied to an image processing apparatus including the image processing circuit shown in fig. 1. The image processing apparatus may further include an image sensor and a display unit; the main control chip may be connected to the image sensor, and the image processing chip may be connected to the display unit. The method may include steps 201-202:
step 201, a main control chip acquires a first video and a first image.
If a user wants to obtain a video with a double exposure effect, the user may trigger the electronic device to enter a double exposure processing mode through an input. When the electronic device is in the double exposure processing mode, the electronic device may receive a first input from the user, where the first input is used to cause the electronic device to obtain the first video and the first image.
Optionally, the first input may include a first sub-input and a second sub-input, where the first sub-input is used to enable the electronic device to acquire the first video, and the second sub-input is used to enable the electronic device to acquire the first image. The first sub-input and the second sub-input may be touch input, voice input, gesture input, or the like. For example, the touch input may be a click input or a long press input of the user on the first video and the first image, or the like.
Illustratively, the first sub-input is a long-press input, and the second sub-input is a click input. In the event that a first camera of the electronic device is aimed at a first scene, a user may make a long press input to the video recording control, and the electronic device may receive and record a first video in response to the long press input. In the case where the second camera of the electronic device is aligned with the second scene, the user may make a click input to the capture control, and the electronic device may receive and capture the first image in response to the click input.
Optionally, before the main control chip acquires the first video, the main control chip may acquire a third video transmitted by the image sensor and transmit the third video to the image processing chip; the image frame interpolation unit of the image processing chip may then perform frame interpolation processing on the third video according to a preset frame rate to generate the first video. The third video may be a video recorded by the first camera, or the third video may be a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
Illustratively, assume the third video is the video recorded by the first camera and the preset frame rate is 120 fps. With the first camera of the electronic device aimed at a shooting object, the user can control the shooting duration of the third video through an input. After the third video is shot, the main control chip can transmit the third video to the image processing chip, and the image processing chip can then convert the third video with a frame rate of 30 fps into a first video with a frame rate of 120 fps through frame interpolation.
Based on this scheme, frame interpolation can be performed on the third video to generate a first video with a higher frame rate; this improves the display effect of the first video and prepares for generating a second video of higher display quality.
Optionally, the first image may be an image captured by a second camera; alternatively, the first image may be a pre-stored image.
Illustratively, the third video is a video recorded by the first camera, and the first image is an image shot by the second camera. An electronic device including the image processing apparatus can shoot the first image through the second camera while recording the third video through the first camera. After receiving the first input from the user, the electronic device can perform frame interpolation processing on the third video to generate the first video, and then obtain the first video and the first image.
Step 202, the image processing chip performs fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
Optionally, the first area may be an image area of the first image including the shooting subject, or may be any area subjectively defined by the user from the first image, and may specifically be determined according to an actual use situation, which is not limited in this embodiment of the present application.
Optionally, the fusing, by the image processing chip, the image of the first region in the first image and the at least two frames of video images of the first video may include the following two implementation manners:
implementation mode 1
The image separation unit of the main control chip can determine a first area from the first image; extracting at least two sub-images from at least two frames of video images of the first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area; the image separation unit may then transmit the first image and the at least two sub-images to the image processing chip; the image synthesis unit of the image processing chip can perform image fusion processing on the first image and each of the at least two sub-images to obtain a second video.
Illustratively, the at least two frames of video images are the video frame 31 and the video frame 32. As shown in fig. 3(a), the image separation unit may determine the first region 34 from the first image 33, that is, the region where the subject is located in the first image. After determining the first region 34, as shown in fig. 3(b), the image separation unit may extract the sub-image 33 from the video frame 31 and the sub-image 34 from the video frame 32 according to the first region 34, and then transmit the first image 33, the sub-image 33, and the sub-image 34 to the image processing chip. As shown in fig. 3(c), the image synthesizing unit may perform image fusion processing on the first image 33 and the sub-image 33 to obtain a new video frame 35, and perform image fusion processing on the first image 33 and the sub-image 34 to obtain a new video frame 36. In this way, a second video including the new video frame 35 and the new video frame 36 can be generated.
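The separation step of implementation mode 1 amounts to cropping the same rectangle out of every video frame of the first video. The sketch below is a minimal illustration assuming an axis-aligned first region and NumPy frames; extract_sub_images is a hypothetical name, and the cropped sub-images could then be passed to a fusion helper such as the replace_region_pixels sketch above.

```python
import numpy as np

def extract_sub_images(video_frames, region):
    """Crop the second region (same coordinates as the first region of the
    first image) out of each video frame of the first video.

    region = (top, left, height, width); returns one sub-image per frame.
    """
    top, left, h, w = region
    return [frame[top:top + h, left:left + w].copy() for frame in video_frames]
```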
Based on the scheme, the second video for displaying the sub-images in the first area can be obtained, namely the second video comprises the double exposure effect of the first image and the first video, and the double exposure technology can be applied to the video field, so that the application scenes of the double exposure technology are enriched, the display mode of the images is enriched, and the interestingness of image display is improved.
Optionally, after the image separation unit determines the first region, the image separation unit may retain the images of the regions of the first image other than the first region, may delete the images of those other regions, or may edit the pixels of the images of those other regions. Which option is used can be determined according to the actual use case and is not limited in the embodiments of the present application.
Optionally, the image synthesizing unit may perform image fusion processing on the first image and each of the at least two sub-images to obtain the second video in either of two ways. In one embodiment, the image synthesizing unit fuses the first image with each sub-image in turn; for example, if the at least two sub-images include sub-image 1 and sub-image 2, the image synthesizing unit may first perform image fusion processing on the first image and sub-image 1, and then perform image fusion processing on the first image and sub-image 2. In another embodiment, the image synthesizing unit copies the first image according to the number of sub-images, and then fuses the copies with the sub-images in one-to-one correspondence; for example, if the at least two sub-images include sub-image 1 and sub-image 2, the image synthesizing unit may copy the first image to obtain copied image 1, perform image fusion processing on the first image and sub-image 1, and perform image fusion processing on copied image 1 and sub-image 2.
Optionally, before the first image and each of the at least two sub-images are subjected to image fusion processing to obtain the second video, the image synthesis unit may adjust the image transparency of the image in the first region from a first transparency to a second transparency, where the first transparency is smaller than the second transparency.
Based on the scheme, the image transparency of the image in the first area can be adjusted from the first transparency to the second transparency, and the sub-image displayed in the first area can be clearer because the first transparency is smaller than the second transparency, so that the video display effect of the second video can be improved.
Optionally, the image fusion processing of the first image and each of the at least two sub-images by the image processing chip may specifically include: an image composition unit replaces pixels in a first area of the first image with pixels in each sub-image, respectively.
Based on the above-described scheme, since the pixels in the first region of the first image can be replaced with the pixels in each sub-image, the sub-image can be displayed in the first region of the first image, so that the second video having the double exposure effect can be generated.
Implementation mode 2
After acquiring the first video and the first image, the image separation unit may extract an image of the first region from the first image to obtain an object image; transmitting the object image and the first video to an image processing chip; the image synthesizing unit of the image processing chip may replace pixels of a third region of each of the at least two frames of video images of the first video with pixels in the object image; the third area is an image area determined by user input or an image area determined by performing feature recognition on each frame of video image.
Illustratively, the at least two frames of video images are the video frame 41 and the video frame 42. As shown in fig. 4(a), the image separation unit may extract an image of the first region from the first image 43, obtain an object image 44, and transmit the object image 44 and the first video to the image processing chip; as shown in fig. 4(b), the image synthesizing unit of the image processing chip may replace the pixels of the video frame 41 and the third region 45 of the video frame 42 of the first video with the pixels in the object image 44, respectively, to obtain a second video including a new video frame 46 and a new video frame 47.
Based on this scheme, since the pixel information of the image of the first area can be added to at least two frames of video images of the first video, a second video marked with the pixels of the first-area image can be obtained. Because the second video contains the double exposure effect of the first image and the first video, the application scenarios of the double exposure technology are enriched, the display modes of images are enriched, and the interest of image display is improved.
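A minimal, non-authoritative sketch of implementation mode 2, assuming the object image is pasted into an axis-aligned third region of each video frame; paste_object_into_frames and the (top, left) convention are assumptions.

```python
import numpy as np

def paste_object_into_frames(video_frames, object_image, third_region):
    """Overwrite the pixels of the third region of each video frame with the
    pixels of the object image extracted from the first image.

    third_region = (top, left); the pasted area takes the object image's size.
    The returned frames make up the second video.
    """
    top, left = third_region
    h, w = object_image.shape[:2]
    second_video = []
    for frame in video_frames:
        fused = frame.copy()
        fused[top:top + h, left:left + w] = object_image
        second_video.append(fused)
    return second_video
```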
Optionally, the image synthesizing unit may further adjust the object image from a first size to a second size, where the first size is larger than the second size; and replacing the pixels of the third area of each frame of video image in the at least two frames of video images with the pixels of the object image after the size adjustment.
Based on the scheme, the size of the object image can be reduced and then the object image is used for replacing the pixels of the third area, so that the influence of the object image on other area images outside the third area in at least two frames of video images can be reduced, and the video display effect of the second video is ensured.
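To illustrate the optional resizing step, the sketch below shrinks the object image by crude nearest-neighbour subsampling before pasting; a real implementation would likely use a proper resampling routine, and the helper name shrink_and_paste and the scale parameter are hypothetical.

```python
import numpy as np

def shrink_and_paste(video_frames, object_image, third_region, scale=0.5):
    """Shrink the object image from its first size to a smaller second size,
    then paste it into the third region so it covers less of each frame."""
    step = max(1, int(round(1.0 / scale)))
    small = object_image[::step, ::step]  # crude nearest-neighbour downscale
    top, left = third_region
    h, w = small.shape[:2]
    second_video = []
    for frame in video_frames:
        fused = frame.copy()
        fused[top:top + h, left:left + w] = small
        second_video.append(fused)
    return second_video
```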
In the embodiments of the present application, because the image of the first area can be merged into the video frames of the first video, a second video with a double exposure effect can be obtained; that is, the double exposure technique is applied to video scenes. This not only enriches the application scenarios of the double exposure technique and increases the diversity of double exposure effects, but also enriches the ways in which images are displayed, makes image display more engaging, and meets users' creative needs.
The image processing apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic Device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 2 to fig. 3, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device 500 is further provided in an embodiment of the present application, and includes the image processing circuit, a processor 501 and a memory 502, where the memory 502 stores a program or an instruction that can be executed on the processor 501, and when the program or the instruction is executed by the processor 501, the steps of the embodiment of the image processing method are implemented, and the same technical effects can be achieved, and are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, an image processing chip, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
In this embodiment, the main control chip may be the processor 610, or the main control chip includes the processor 610, that is, the processor 610 is integrated on the main control chip.
The main control chip is used for acquiring a first video and a first image; the image processing chip is used for fusing the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
In the embodiments of the present application, because the image of the first area can be merged into the video frames of the first video, a second video with a double exposure effect can be obtained; that is, the double exposure technique is applied to video scenes. This not only enriches the application scenarios of the double exposure technique and increases the diversity of double exposure effects, but also enriches the ways in which images are displayed, makes image display more engaging, and meets users' creative needs.
Optionally, the main control chip includes an image separation unit; the image processing chip includes an image synthesizing unit; the interface unit 608 includes a first output interface, a second output interface, a first input interface, and a second input interface. The image separation unit is respectively connected with the first output interface and the second output interface, the first output interface is connected with the first input interface, the second output interface is connected with the second input interface, and the first input interface and the second input interface are respectively connected with the image synthesis unit;
the image separation unit is used for determining a first area from the first image and extracting at least two sub-images from at least two frames of video images of the first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area; the first output interface is used for outputting the first image, and the first input interface is used for receiving the first image; the second output interface is used for outputting the at least two sub-images, and the second input interface is used for receiving the at least two sub-images; the image synthesis unit is used for carrying out image fusion processing on the first image and each of the at least two sub-images to obtain a second video.
In the embodiment of the application, the second video for displaying the sub-image in the first area can be obtained, namely the second video comprises the double exposure effect of the first image and the first video, and the double exposure technology can be applied to the video field, so that the application scenes of the double exposure technology are enriched, the display mode of the image is enriched, and the interestingness of image display is improved.
Optionally, the interface unit 608 further includes a third input interface, a third output interface; the image processing chip also comprises an image frame inserting unit; the second input interface is connected with the image frame insertion unit, the image frame insertion unit is respectively connected with the image synthesis unit and the third output interface, and the third output interface is connected with the third input interface.
The image frame interpolation unit is used for performing frame interpolation processing on the third video according to a preset frame rate to generate the first video; the third output interface is used for transmitting the first video to the main control chip through the third input interface. The third video is a video recorded by a first camera, or the third video is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
In the embodiment of the present application, since the frame interpolation processing can be performed on the third video to generate the first video with a higher frame rate, the display effect of the first video can be improved, and preparation can be provided for generating the second video with higher display quality.
Optionally, the image synthesizing unit is further configured to, before performing image fusion processing on the first image and each of the at least two sub-images to obtain the second video, adjust the image transparency of the image in the first region from a first transparency to a second transparency, where the first transparency is smaller than the second transparency.
In the embodiment of the application, the image transparency of the image in the first area can be adjusted from the first transparency to the second transparency, and since the first transparency is smaller than the second transparency, the sub-image displayed in the first area can be clearer, so that the video display effect of the second video can be improved.
Optionally, the image synthesizing unit is specifically configured to replace pixels in the first region of the first image with pixels in each sub-image, respectively.
In the embodiment of the present application, since the pixels in the first region of the first image can be replaced with the pixels in each sub-image, the sub-image can be displayed in the first region of the first image, so that the second video having the double exposure effect can be generated.
Optionally, the image separation unit is configured to extract an image of the first region from the first image to obtain an object image; the first output interface is used for outputting the object image, and the first input interface is used for receiving the object image; the second output interface is used for outputting the first video, and the second input interface is used for receiving the first video; the image synthesis unit is used for replacing pixels of a third area of each frame of video image in the at least two frames of video images with pixels in the object image; the third area is an image area determined by user input or an image area determined by performing feature recognition on each frame of video image.
In the embodiment of the present application, since the pixel information of the image of the first region can be added to at least two frames of video images of the first video, a second video marked with the pixels of the first-region image can be obtained. Because the second video contains the double exposure effect of the first image and the first video, the application scenarios of the double exposure technology are enriched, the display modes of images are enriched, and the interest of image display is improved.
Optionally, the image synthesizing unit is specifically configured to adjust the object image from a first size to a second size, where the first size is larger than the second size; and replacing pixels of a third area of each frame of video image in the at least two frames of video images with pixels of the object image after the size adjustment.
In the embodiment of the application, since the size of the object image can be reduced first and then used for replacing the pixels of the third area, the influence of the object image on the images of other areas outside the third area in the at least two frames of video images can be reduced, thereby ensuring the video display effect of the second video.
It is to be understood that, in the embodiment of the present application, the input Unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042, and the Graphics Processing Unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes at least one of a touch panel 6071 and other input devices 6072. A touch panel 6071, also referred to as a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, wherein the first storage area may store an operating system, an application program or an instruction (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 609 may include volatile memory or nonvolatile memory, or the memory 609 may include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static Random Access Memory (Static RAM, SRAM), a Dynamic Random Access Memory (Dynamic RAM, DRAM), a Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, DDR SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 610 may include one or more processing units; optionally, the processor 610 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
An embodiment of the present application further provides a processing chip, where the processing chip includes a processor and a communication interface, the communication interface is coupled to the processor, the communication interface is used to transmit image data, and the processor is used to run a program or instructions to implement the steps executed by the image processing chip in the above image processing method embodiments. The same technical effects can be achieved, and details are not repeated here to avoid repetition.
An embodiment of the present application further provides a control chip, where the control chip includes a processor and a communication interface, the communication interface is coupled to the processor, the communication interface is used to transmit image data, and the processor is used to run a program or instructions to implement the steps executed by the main control chip in the above image processing method embodiments. The same technical effects can be achieved, and details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (20)

1. An image processing circuit is characterized by comprising a main control chip and an image processing chip, wherein the main control chip is connected with the image processing chip;
the main control chip is used for acquiring a first video and a first image;
the image processing chip is used for fusing the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
2. The image processing circuit of claim 1, wherein the main control chip comprises a first output interface, a second output interface, and an image separation unit; the image processing chip comprises a first input interface, a second input interface and an image synthesis unit;
the image separation unit is respectively connected with the first output interface and the second output interface, the first output interface is connected with the first input interface, the second output interface is connected with the second input interface, and the first input interface and the second input interface are respectively connected with the image synthesis unit;
the image separation unit is used for determining a first area from the first image and extracting at least two sub-images from at least two frames of video images of the first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area;
the first output interface is used for outputting the first image, and the first input interface is used for receiving the first image; the second output interface is used for outputting the at least two sub-images, and the second input interface is used for receiving the at least two sub-images;
the image synthesis unit is used for carrying out image fusion processing on the first image and each of the at least two sub-images to obtain a second video.
3. The image processing circuit of claim 2, wherein the main control chip further comprises a third input interface, the image processing chip further comprises an image frame insertion unit, a third output interface;
the second input interface is connected with the image frame insertion unit, the image frame insertion unit is respectively connected with the image synthesis unit and the third output interface, and the third output interface is connected with the third input interface;
the second output interface is used for transmitting a third video to the image frame interpolation unit through the second input interface;
the image frame interpolation unit is used for performing frame interpolation processing on the third video according to a preset frame rate to generate a first video;
the third output interface is used for transmitting the first video to the main control chip through the third input interface;
the third video is a video recorded by a first camera, or the third video image is a pre-stored video; the frame rate of the first video is higher than the frame rate of the third video.
4. The image processing circuit according to claim 2, wherein the image synthesizing unit is further configured to, before performing image fusion processing on the first image and each of the at least two sub-images to obtain the second video, adjust the image transparency of the image in the first region from a first transparency to a second transparency, wherein the first transparency is smaller than the second transparency.
5. The image processing circuit according to claim 2, wherein the image synthesis unit is specifically configured to replace pixels in the first area of the first image with pixels in each sub-image, respectively.
6. The image processing circuit of claim 1, wherein the main control chip comprises a first output interface, a second output interface, and an image separation unit; the image processing chip comprises a first input interface, a second input interface and an image synthesis unit;
the image separation unit is respectively connected with the first output interface and the second output interface, the first output interface is connected with the first input interface, the second output interface is connected with the second input interface, and the first input interface and the second input interface are respectively connected with the image synthesis unit;
the image separation unit is used for extracting the image of the first area from the first image to obtain an object image;
the first output interface is used for outputting the object image, and the first input interface is used for receiving the object image; the second output interface is used for outputting the first video, and the second input interface is used for receiving the first video;
the image synthesis unit is used for replacing pixels of a third area of each frame of video image in the at least two frames of video images with pixels in the object image;
the third area is an image area determined by user input or an image area determined by performing feature recognition on each frame of video image.
7. The image processing circuit of claim 6, wherein the image synthesis unit is specifically configured to resize the object image from a first size to a second size, the first size being larger than the second size; and replacing pixels of a third area of each frame of video image in the at least two frames of video images with pixels of the object image after the size adjustment.
8. An image processing apparatus characterized by comprising the image processing circuit according to any one of claims 1 to 7.
9. The image processing device according to claim 8, further comprising an image sensor, wherein the main control chip is connected to the image sensor.
10. An image processing method applied to the image processing apparatus according to claim 8 or 9, comprising:
the method comprises the steps that a main control chip obtains a first video and a first image;
and the image processing chip performs fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video.
11. The image processing method according to claim 10, wherein before the image processing chip performs fusion processing on the image of the first region in the first image and at least two frames of video images of the first video to obtain the second video, the method further comprises:
an image separation unit of the main control chip determines a first area from the first image;
an image separation unit of the main control chip extracts at least two sub-images from at least two frames of video images of the first video, wherein the sub-images are images of a second area in each frame of video image, and the second area is an area corresponding to the first area;
the image separation unit of the main control chip transmits the first image and the at least two sub-images to an image processing chip;
the image processing chip performs fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video, and the fusion processing comprises the following steps:
and the image synthesis unit of the image processing chip carries out image fusion processing on the first image and each of the at least two sub-images respectively to obtain a second video.
12. The image processing method according to claim 11, wherein before the main control chip acquires the first video and the first image, the method further comprises:
the main control chip transmits the third video to the image processing chip;
an image frame interpolation unit of the image processing chip performs frame interpolation processing on the third video according to a preset frame rate to generate a first video;
the third video is a video recorded by a first camera, or the third video is a pre-stored video; and the frame rate of the first video is higher than the frame rate of the third video.
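As a hedged illustration of the frame interpolation in claim 12, the simplest scheme inserts an averaged frame between every pair of adjacent frames, doubling the frame rate; the application does not specify the interpolation algorithm, and practical implementations are usually motion-compensated.

```python
import numpy as np

def double_frame_rate(frames):
    """Insert the average of every pair of adjacent frames, doubling the frame rate.
    This is only the simplest possible stand-in for the image frame interpolation unit."""
    interpolated = []
    for current, following in zip(frames[:-1], frames[1:]):
        interpolated.append(current)
        midpoint = (current.astype(np.float32) + following.astype(np.float32)) / 2.0
        interpolated.append(midpoint.astype(current.dtype))
    interpolated.append(frames[-1])
    return interpolated
```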
13. The image processing method according to claim 11, wherein before the image processing chip performs image fusion processing on the first image and each of the at least two sub-images to obtain the second video, the method further comprises:
the image synthesis unit adjusts the image transparency of the image of the first area from a first transparency to a second transparency, and the first transparency is smaller than the second transparency.
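Assuming the first-area image carries an 8-bit alpha channel, the transparency adjustment in claim 13 could look like the sketch below, where a larger transparency value means a more see-through image before fusion; the RGBA representation is an assumption, not something stated in the claim.

```python
import numpy as np

def set_transparency(rgba_image: np.ndarray, transparency: float) -> np.ndarray:
    """Set the alpha channel of an 8-bit RGBA image so that 0.0 means fully opaque
    and 1.0 means fully transparent."""
    adjusted = rgba_image.copy()
    adjusted[..., 3] = int(round((1.0 - transparency) * 255))
    return adjusted
```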
14. The image processing method according to claim 11, wherein the image processing chip performs image fusion processing on the first image and each of the at least two sub-images, and the image fusion processing comprises the following steps:
the image synthesis unit replaces pixels in the first area of the first image with pixels in each sub-image, respectively.
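Claim 14 specifies pixel replacement rather than blending; a minimal sketch, assuming the same (top, left, height, width) region format as in the earlier illustrations:

```python
import numpy as np

def overwrite_first_area(first_image: np.ndarray, sub_images, first_area: tuple):
    """Build the second video by overwriting the first area of the still image with
    each frame's sub-image in turn: pure pixel replacement, with no blending."""
    top, left, height, width = first_area
    frames_out = []
    for sub in sub_images:
        out = first_image.copy()
        out[top:top + height, left:left + width] = sub
        frames_out.append(out)
    return frames_out
```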
15. The image processing method according to claim 10, wherein before the image processing chip performs fusion processing on the image of the first area in the first image and the at least two frames of video images of the first video to obtain the second video, the method further comprises:
the image separation unit of the main control chip extracts the image of the first area from the first image to obtain an object image;
the image separation unit of the main control chip transmits the object image and the first video to an image processing chip;
the image processing chip performs fusion processing on the image of the first area in the first image and at least two frames of video images of the first video to obtain a second video, and the fusion processing comprises the following steps:
an image synthesis unit of the image processing chip replaces pixels of a third area of each frame of video image in the at least two frames of video images with pixels in the object image;
the third area is an image area determined by user input or an image area determined by performing feature recognition on each frame of video image.
16. The image processing method according to claim 15, wherein the image synthesis unit adjusts the object image from a first size to a second size, the first size being larger than the second size, and replaces pixels of the third area of each frame of video image in the at least two frames of video images with pixels of the resized object image.
17. The image processing method according to any one of claims 10 to 16, wherein the first image is an image taken by a second camera, or the first image is a pre-stored image.
18. An electronic device, comprising the image processing circuit of any one of claims 1 to 7, a processor, and a memory, wherein the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 10 to 16.
19. A processing chip, characterized in that the processing chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the communication interface is used for transmitting image data, and the processor is used for executing a program or instructions to implement the steps executed by the image processing chip in the image processing method according to any one of claims 10 to 16.
20. A control chip, wherein the control chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the communication interface is used for transmitting image data, and the processor is used for executing a program or instructions to implement the steps executed by the main control chip in the image processing method according to any one of claims 10 to 16.
CN202111627097.5A 2021-12-28 2021-12-28 Image processing circuit, method and device, electronic equipment and chip Pending CN114286002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111627097.5A CN114286002A (en) 2021-12-28 2021-12-28 Image processing circuit, method and device, electronic equipment and chip

Publications (1)

Publication Number Publication Date
CN114286002A true CN114286002A (en) 2022-04-05

Family

ID=80877033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111627097.5A Pending CN114286002A (en) 2021-12-28 2021-12-28 Image processing circuit, method and device, electronic equipment and chip

Country Status (1)

Country Link
CN (1) CN114286002A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111107267A (en) * 2019-12-30 2020-05-05 广州华多网络科技有限公司 Image processing method, device, equipment and storage medium
CN112135049A (en) * 2020-09-24 2020-12-25 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112887583A (en) * 2019-11-30 2021-06-01 华为技术有限公司 Shooting method and electronic equipment

Similar Documents

Publication Publication Date Title
WO2022111730A1 (en) Image processing method and apparatus, and electronic device
CN113014801B (en) Video recording method, video recording device, electronic equipment and medium
WO2023125657A1 (en) Image processing method and apparatus, and electronic device
CN111327823A (en) Video generation method and device and corresponding storage medium
CN110086998B (en) Shooting method and terminal
EP4254934A1 (en) Image processing method and apparatus, and electronic device
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN116419049A (en) Image processing method, image processing system, device and electronic equipment
CN111818382B (en) Screen recording method and device and electronic equipment
WO2023125316A1 (en) Video processing method and apparatus, electronic device, and medium
WO2022247766A1 (en) Image processing method and apparatus, and electronic device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN114338874A (en) Image display method of electronic device, image processing circuit and electronic device
CN112738399B (en) Image processing method and device and electronic equipment
CN114339071A (en) Image processing circuit, image processing method and electronic device
CN115278047A (en) Shooting method, shooting device, electronic equipment and storage medium
CN114286002A (en) Image processing circuit, method and device, electronic equipment and chip
CN114285957A (en) Image processing circuit and data transmission method
CN114615426A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113596329A (en) Photographing method and photographing apparatus
CN112887515A (en) Video generation method and device
CN113923367B (en) Shooting method and shooting device
CN112367562B (en) Image processing method and device and electronic equipment
CN115633251A (en) Image processing method, circuit and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination