CN116723307A - Image processing method, image processing device and electronic equipment - Google Patents


Info

Publication number
CN116723307A
CN116723307A (application CN202310791091.4A)
Authority
CN
China
Prior art keywords
image
content
target
offset
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310791091.4A
Other languages
Chinese (zh)
Inventor
刘宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310791091.4A priority Critical patent/CN116723307A/en
Publication of CN116723307A publication Critical patent/CN116723307A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/327Calibration thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/398Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image processing method, an image processing device and electronic equipment, and belongs to the technical field of image processing. The method comprises: determining an image offset between a first 3D image and a second 3D image when display content is switched between them, the two images being of consecutive viewing angles; when the image offset meets a preset condition, determining target image content corresponding to the user's visual focus in the first or second 3D image; and adjusting display parameters of the first or second 3D image based on the target image content to obtain and display a third 3D image, in which the stereoscopic effect of the target image content is stronger than that of the image content other than the target image content.

Description

Image processing method, image processing device and electronic equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method, an image processing device and electronic equipment.
Background
With the development of three-dimensional (3D) image display technology on mobile terminals, more and more applications use 3D technology to present stereoscopic images, such as large-scale mobile games or navigation maps.
The stereoscopic effect of a 3D image can bring an immersive experience to some users, but it can also make users feel dizzy.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem that dizziness is generated when a user views a 3D image.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining an image offset between a first 3D image and a second 3D image in a case where display contents are switched between the first 3D image and the second 3D image, the first 3D image and the second 3D image being images of consecutive viewing angles;
under the condition that the image offset meets a preset condition, determining target image content corresponding to a visual focus of a user in the first 3D image or the second 3D image;
and adjusting display parameters of the first 3D image or the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image, wherein the stereoscopic effect of the target image content in the third 3D image is stronger than that of the image content except the target image content.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a first determining module for determining an image offset between a first 3D image and a second 3D image in a case where display contents are switched between the first 3D image and the second 3D image, the first 3D image and the second 3D image being images of consecutive viewing angles;
the second determining module is used for determining target image content corresponding to the visual focus of the user in the first 3D image or the second 3D image under the condition that the image offset meets a preset condition;
and the image processing module is used for adjusting the display parameters of the first 3D image or the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image, wherein the stereoscopic vision effect of the target image content in the third 3D image is stronger than that of the image content except the target image content.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, when the display content is switched between the first 3D image and the second 3D image, the image offset between them is determined, so that it can be judged whether the 3D image is in a state likely to make the user dizzy. When the image offset meets the preset condition, the target image content corresponding to the user's visual focus is determined in the first or second 3D image, and that image is processed to obtain and display a third 3D image in which the stereoscopic effect of the target image content is stronger than that of the remaining content. This weakens the user's sense of immersion and the realism of the 3D image, so the dizziness caused by the 3D image can be reduced, while the target image content at the user's visual focus keeps a good stereoscopic effect.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic structural view of an image processing apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It is to be understood that the terms so used may be interchanged where appropriate, so that embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method, the image processing device, the electronic equipment and the medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The execution subject of the image processing method provided by the embodiment of the application may be a terminal or a functional module or a functional entity in the terminal capable of implementing the image processing method, and the terminal provided by the embodiment of the application includes, but is not limited to, a mobile phone, a tablet computer, a camera, a wearable device, and the like.
In the related art, 3D dizziness arises because the brain receives inconsistent signals, for example from a mismatch between human vision and the vestibular system. The optic nerve senses large-amplitude, high-frequency motion in the 3D image while the other motion-sensing organs sense none; these two contradictory signals are difficult for the nerve center to reconcile and cause dizziness.
3D dizziness can be reduced from both the developer side and the user side:
First, the developer can reduce the realism of the picture, prevent the picture from shaking, lower the rotation speed of the viewing angle, and so on, to keep users from experiencing 3D dizziness. However, implementing such an anti-dizziness function increases development cost without improving picture quality, and the resulting 3D image distortion may even degrade the experience of unaffected users, so developers have little motivation to implement it.
Second, the user can reduce 3D dizziness by lowering the resolution or frame rate, or by changing the field of view. These extra operations increase the user's cost of use, and the overall stereoscopic effect of the 3D image becomes poor.
The application provides an image processing method implemented at the terminal system layer, which requires neither developer adaptation of an anti-dizziness function nor additional user operations: it automatically performs image processing on the 3D image and can reduce the dizziness a user may experience from it.
As shown in fig. 1, the image processing method includes: step 110, step 120 and step 130.
Step 110, determining an image offset between a first 3D image and a second 3D image, wherein the first 3D image and the second 3D image are images with continuous visual angles, when the display content is switched between the first 3D image and the second 3D image;
the display content can be switched from the first 3D image to the second 3D image, can be switched from the second 3D image to the first 3D image, can be switched from the first 3D image to the second 3D image, and can be switched from the second 3D image to the first 3D image to form back and forth switching.
Step 120, determining target image content corresponding to a visual focus of a user in the first 3D image or the second 3D image under the condition that the image offset meets a preset condition;
If the user currently looks at the first 3D image, determining target image content corresponding to the visual focus of the user in the first 3D image; and if the user currently looks at the second 3D image, determining target image content corresponding to the visual focus of the user in the second 3D image.
And 130, adjusting display parameters of the first 3D image or the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image, wherein the stereoscopic effect of the target image content in the third 3D image is stronger than that of the image content except the target image content.
Based on 120, if the target image content corresponding to the visual focus of the user is determined in the first 3D image, adjusting the display parameters of the first 3D image; and if the target image content corresponding to the visual focus of the user is determined in the second 3D image, adjusting the display parameters of the second 3D image.
Wherein the 3D image is an image within the user's visual range, for example: the user may view the 3D image on a hand-held terminal or view it using a head-mounted display device.
The 3D image may include a plurality of images of different perspectives, each of the images of the perspectives being a 3D image. For example: the first 3D image and the second 3D image in the application, wherein the first 3D image and the second 3D image are 3D images with continuous visual angles, and the first 3D image and the second 3D image can be switched back and forth.
The visual angle switching can be automatic switching or manual switching, the automatic switching can be that the terminal automatically plays the 3D image, and the manual switching can be that the touch input or the motion sensing input of the user is received.
Taking a 3D movie as an example: when a user watches the movie with a head-mounted display device, the picture plays automatically, i.e. the viewing angle switches automatically. When a user plays a game with a head-mounted display device, the movement of the virtual character can be controlled through hardware such as a handle or keys, changing the character's viewing angle, i.e. the viewing angle is switched manually. Alternatively, while wearing the device, the user can complete motion-sensing input by rotating the body or moving it in another direction, and the displayed 3D image switches viewing angle based on that input.
Taking a mobile-game picture as an example: a manual single-finger slide can adjust the picture's viewing angle. For example, when a character in a mobile game needs to turn around 180 degrees, the user presses the character on the screen with one finger and slides left or right a certain distance to turn the character around. Alternatively, a virtual key completes the picture flip: for example, a virtual key is arranged in a navigation-map interface, and when the user needs to switch from the top view to the front view, pressing the view button switches the viewing angle.
It is understood that viewing-angle switching of a 3D image corresponds to switching between the display contents of images of different viewing angles. Some users become dizzy when the display content of the 3D image in their field of view changes rapidly within a short time; they may even be unable to focus effectively on the core image content, which greatly degrades their experience of the 3D image.
Therefore, in the embodiment of the present application, when the display content is switched from the first 3D image to the second 3D image or from the second 3D image back to the first 3D image, an image offset between the first 3D image and the second 3D image is determined, and the image offset is used to determine whether the 3D image is in a state that may cause dizziness, such as shaking or rapid change.
When the image offset meets the preset condition, the 3D image is considered to be in a shaking or rapidly changing state.
The preset conditions include at least one of the following:
the magnitude of the image offset exceeds a first threshold;
the frequency of change of the image offset exceeds a second threshold, which may be positively correlated with the switching rate between the first 3D image and the second 3D image;
the first threshold and the second threshold may be determined according to actual requirements, and are not specifically limited herein.
Then, in the second 3D image corresponding to the switched viewing angle, an image area corresponding to the user's visual focus is determined. The image content included in that area is the target image content.
When the visual angle of the 3D image is switched, the visual range of the user can be changed continuously, so that the visual focus of the user can be repositioned in real time, and the content of the target image can be determined dynamically.
And then, carrying out image processing on the target image content and the image content except the target image content in the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image.
In actual implementation, the display parameters of the target image content, or of image content close to the user's visual focus, may be left unadjusted, so that the fidelity of that content is preserved and the user experience is not harmed. Since the user pays little attention to image content other than the target image content, the display parameters of that other content can be adjusted to differ from those of the target image content, ensuring that the stereoscopic effect of the target image content in the third 3D image is stronger than that of the rest. The differing stereoscopic effects weaken the user's immersive experience of the 3D image, reduce the user's sense of being inside it, and thereby avoid dizziness.
In this step, the display parameters of the target image content and the display parameters of the image content other than the target image content may be adjusted at the same time, so that the display parameters are different, and the stereoscopic effect of the target image content in the third 3D image is stronger than the stereoscopic effect of the image content other than the target image content.
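One way the "stronger stereoscopic effect" for the target content could be realized is through binocular disparity: scaling the disparity of non-target pixels toward zero flattens them while the target region keeps full depth. The function, its parameters, and the disparity representation below are illustrative assumptions, not the patented method:

```python
def adjust_disparity(disparity, focus_mask, outside_scale=0.5):
    """Weaken the stereoscopic effect outside the target image content.

    disparity:  2D list of per-pixel binocular disparity values.
    focus_mask: same shape, True where the pixel belongs to the target
                image content (the user's visual focus).
    Pixels outside the mask have their disparity scaled toward zero, so
    the target content ends up with a stronger stereoscopic effect.
    """
    return [
        [d if m else d * outside_scale for d, m in zip(drow, mrow)]
        for drow, mrow in zip(disparity, focus_mask)
    ]
```

Setting `outside_scale` to 0 would make the non-target content appear fully flat; intermediate values merely weaken its depth.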
According to the image processing method provided by the embodiment of the application, the image offset between the first 3D image and the second 3D image is determined when the display content is switched between them, so that it can be judged whether the 3D image is in a state likely to make the user dizzy. When the image offset meets the preset condition, the target image content corresponding to the user's visual focus is determined in the first or second 3D image, and that image is processed to obtain and display a third 3D image in which the stereoscopic effect of the target image content is stronger than that of the remaining content. This reduces the user's sense of immersion and the realism of the 3D image, so the dizziness caused by the 3D image can be reduced, while the target image content at the user's visual focus keeps a good stereoscopic effect.
In some embodiments, the image offset is used to indicate an offset between the same image content in the first 3D image and the second 3D image. The image offset may reflect the change situation of the 3D image, and thus it may be estimated whether the 3D image is in a state of causing 3D dizziness.
It is understood that the same image content in the first 3D image and the second 3D image may be image content in the visual focus of the user or may be image content out of the visual focus of the user.
For example: the first 3D image is an image of a user driving a car in a game scene, and in the driving process, the scene image outside the car window is changed all the time, so that the same image content in the second 3D image and the first 3D image is the scene image inside the car, and the scene image outside the car window is different image content.
In some embodiments, determining an image offset between the first 3D image and the second 3D image comprises:
an image offset is determined based on the pixel coordinates of the first 3D image and the pixel coordinates of the second 3D image.
It will be appreciated that the first 3D image and the second 3D image include the same image content, and that the pixel coordinates of the same image content may change during the switching of the viewing angle.
In actual implementation, the pixel point coordinates of the feature points corresponding to the same image content in the first 3D image and the second 3D image can be identified through an artificial intelligence module built in the terminal, and the pixel point coordinate offset of the feature points corresponding to the same image content is calculated to be the image offset between the first 3D image and the second 3D image.
The image offset is then compared with the first threshold to judge whether the 3D image is in a state likely to make the user dizzy.
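A minimal sketch of this computation, assuming matched feature-point coordinates are already available from some detector (the helper name is hypothetical): the image offset is taken here as the mean pixel displacement of the matched points.

```python
import math

def image_offset(points_a, points_b):
    """Mean displacement (pixels) of feature points matched between views.

    points_a / points_b: lists of (x, y) pixel coordinates of the SAME
    image content in the first and second 3D image, in matching order.
    """
    assert points_a and len(points_a) == len(points_b)
    total = sum(math.dist(p, q) for p, q in zip(points_a, points_b))
    return total / len(points_a)
```

The returned value would then be compared against the first threshold of the preset condition.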
According to the image processing method provided by the embodiment of the application, the change condition of the 3D image can be determined in real time by determining the image offset between the same image content in the first 3D image and the second 3D image when the visual angle is changed, and the image processing can be timely performed according to the change condition of the 3D image, so that the possibility that a user generates dizziness due to the 3D image can be reduced.
In some embodiments, before determining the image offset between the first 3D image and the second 3D image, the image processing method further comprises:
receiving a first input of a user;
responsive to the first input, turning on a target mode;
determining an image offset between the first 3D image and the second 3D image comprises:
And determining an image offset when the target mode is on and the display content is switched between the first 3D image and the second 3D image.
In actual implementation, the terminal may automatically determine the change condition of the 3D image, or may manually set the target mode to start the determination of the 3D image.
In this step, the first input is used to turn on a target mode for performing image processing in the case where the image offset satisfies a preset condition. The target mode may be a function mode of the terminal system, and may achieve reduction of user dizziness when displaying the 3D image.
Wherein the first input may be at least one of:
first, the first input may be a touch operation including, but not limited to, a click operation, a slide operation, a press operation, and the like.
In this embodiment, the receiving the first input of the user may be receiving a touch operation of the user in a display area of the terminal display screen.
To reduce accidental triggering, the active area of the first input may be limited to a specific region, such as the upper-middle area of the interface corresponding to the 3D image; or a target control corresponding to the target mode may be displayed on the current interface while the interface corresponding to the 3D image is shown, and touching that control performs the first input; or the first input may be set as multiple consecutive taps on the display area within a target time interval.
Second, the first input may be a physical key input.
In this embodiment, the body of the terminal is provided with a physical key corresponding to the target mode, and receiving the first input of the user may be receiving the user pressing the corresponding physical key; the first input may also be a combined operation of pressing several physical keys simultaneously.
Third, the first input may be a voice input.
In this embodiment, the terminal may trigger the opening of the target mode when a voice such as "open reduced dizziness mode" is received.
Of course, in other embodiments, the first input may also be in other forms, including but not limited to character input, etc., which may be specifically determined according to actual needs, which is not limited in the embodiments of the present application.
After receiving the first input, the terminal may start the target mode in response to the first input. The terminal may determine an image offset between the first 3D image and the second 3D image to determine whether to perform image processing on the first 3D image or the second 3D image when the target mode is on and the display content is switched between the first 3D image and the second 3D image.
According to the image processing method provided by the embodiment of the application, the target mode is set, so that a user can select whether to start the target mode according to actual requirements, and the use experience of the user on the 3D image is improved.
In some embodiments, the display parameters include at least one of: image style, resolution, and brightness.
In actual implementation, the display parameters of the 3D image may include: one or more of image style, resolution, and brightness.
It should be noted that the image style refers to a special image effect. After the original image is processed, an image with that special effect is obtained. For example, it may be at least one of the following styles: comic style, hand-drawing style, oil painting style, black-and-white style, sketch style, or color pencil drawing style.
The resolution of an image refers to the amount of information stored in the image, i.e., the number of pixels per inch of the image.
The brightness of an image refers to how light or dark the image appears.
According to the image processing method provided by the embodiment of the application, the target image content and the image content except the target image content in the target image can be distinguished by adjusting the display parameters, so that the substitution sense of a user on the 3D image is reduced.
In some embodiments, where the display parameters include an image style, adjusting the display parameters corresponding to the first 3D image or the second 3D image includes: converting an image style of image contents other than the target image contents;
In the case where the display parameter includes a resolution, adjusting the display parameter corresponding to the first 3D image or the second 3D image includes: reducing resolution of image contents other than the target image content;
when the display parameter includes brightness, adjusting the display parameter corresponding to the first 3D image or the second 3D image includes: the brightness of the image contents other than the target image content is reduced.
In actual execution, where the display parameter includes image style, the first 3D image or the second 3D image is processed by converting the image style of the content other than the target image content, thereby obtaining the third 3D image. For example, it may be converted into a comic, hand-drawing, oil painting, black-and-white, sketch, or color pencil drawing style. Converting the style of the non-target content reduces the realism of the picture while keeping image detail, and thus reduces the user's dizziness.
Where the display parameter includes resolution, the first 3D image or the second 3D image is processed by reducing the resolution of the content other than the target image content, yielding the third 3D image. The higher the resolution of a 3D image, the more vivid the picture the user sees and the higher the probability of dizziness; lowering the resolution of the non-target content lowers that content's image quality and reduces the user's sense of immersion, while the resolution of the target image content stays unchanged, so the content at the user's visual focus keeps its full effect and the user's dizziness is relieved.
Where the display parameter includes brightness, the first 3D image or the second 3D image is processed by controlling the brightness of the content other than the target image content to be lower than that of the target image content, yielding the third 3D image. The non-target content thus appears darker, and the user's attention is better focused on the target content. When the non-target content is dark enough, the user hardly notices it at all; this effectively narrows the viewing angle of the third 3D image and achieves the anti-dizziness effect.
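The resolution and brightness adjustments above can be sketched on a grayscale frame, with a mask marking the target image content. Function names, the dimming factor, and the block size are illustrative assumptions:

```python
def dim_outside_focus(image, focus_mask, brightness_scale=0.4):
    """Lower the brightness of content outside the target image content.
    image: 2D list of grayscale values 0-255; focus_mask: True inside target."""
    return [
        [px if m else int(px * brightness_scale) for px, m in zip(prow, mrow)]
        for prow, mrow in zip(image, focus_mask)
    ]

def downsample_outside_focus(image, focus_mask, block=2):
    """Reduce effective resolution outside the target by block-averaging."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(0, h, block):
        for x in range(0, w, block):
            ys = range(y, min(y + block, h))
            xs = range(x, min(x + block, w))
            vals = [image[j][i] for j in ys for i in xs]
            avg = sum(vals) // len(vals)
            for j in ys:
                for i in xs:
                    if not focus_mask[j][i]:  # target pixels keep full detail
                        out[j][i] = avg
    return out
```

Both transforms leave pixels inside the mask untouched, so the target image content keeps its original quality while the rest is degraded.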
In practical implementation, the image processing may proceed outward from the user's visual focus, with the degree of processing increasing from the center to the periphery. For example: from the image content at the user's visual focus out to the content away from it, the rendering precision of the image style, the image resolution, or the image brightness decreases gradually from inside to outside.
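The center-to-periphery falloff could be modeled as a per-pixel processing weight that grows with distance from the visual focus. A simple linear ramp is assumed here; the document does not specify the actual falloff curve:

```python
import math

def processing_weight(x, y, focus_x, focus_y, radius):
    """0.0 at the visual focus (full fidelity), rising to 1.0 at `radius`
    and beyond (strongest style/resolution/brightness processing)."""
    d = math.hypot(x - focus_x, y - focus_y)
    return min(d / radius, 1.0)
```

The weight could then scale any of the adjustments above, e.g. interpolating between the original pixel and its style-converted or dimmed version.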
In some implementations, the image processing method provided by the present application may include the steps of:
Step 201, detecting that the user has turned on a "dizziness reducing mode", with the displayed image style pre-selected from a cartoon style, a hand-painting style, an oil painting style, a black-and-white style, a sketch style, or a color pencil drawing style;
step 202, automatically capturing a visual focus of a user;
step 203, reducing the rendering fineness of the area outside the user's visual focus and processing that area according to the set image style, where the rendering precision is higher closer to the visual focus and lower farther from it;
step 204, displaying the processed screen picture.
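Steps 202-204 can be sketched as a single per-frame function. Everything here is an illustrative assumption (the blend weights, radii, and the `stylize` callback stand in for whatever style-conversion method the system actually uses):

```python
import numpy as np

def process_frame(frame, focus_xy, stylize, inner=20, outer=60):
    # Steps 202-204 in miniature: given the captured visual focus,
    # blend the original and stylized frames with a weight that grows
    # with distance from the focus, then return the frame for display.
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    weight = np.clip((np.hypot(xs - focus_xy[0], ys - focus_xy[1]) - inner)
                     / (outer - inner), 0.0, 1.0)
    styled = stylize(frame)
    if frame.ndim == 3:
        weight = weight[..., None]  # broadcast over color channels
    return frame * (1.0 - weight) + styled * weight
```

The focus itself would come from an eye tracker or a saliency model (step 202), which is outside the scope of this sketch.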
For example: when a user plays a 3D game on a mobile phone with the "3D dizziness prevention mode" turned on, if the system determines through artificial intelligence that the picture is shaking or changing rapidly, it automatically captures the user's visual focus at that moment and performs style conversion on the parts outside the focus. For instance, in a scene where the player rides a horse while chasing a butterfly, the system detects that the picture shakes as the player rides and captures that the user's visual focus is on the butterfly; the system can then convert the style of everything except the butterfly into a cartoon, hand-painting, or oil painting style.
According to the image processing method provided by the embodiment of the application, the picture style conversion can be performed while hardly reducing the information perceived by the user, reducing the dizziness the 3D image causes the user.
For the image processing method provided by the embodiment of the application, the execution subject can be an image processing device. In the embodiment of the present application, the image processing apparatus is described taking the image processing method being performed by the image processing apparatus as an example.
The embodiment of the application also provides an image processing device.
As shown in fig. 2, the image processing apparatus includes: the first determination module 210, the second determination module 220, and the image processing module 230.
A first determining module 210 for determining an image offset between a first 3D image and a second 3D image in case that display contents are switched between the first 3D image and the second 3D image, the first 3D image and the second 3D image being images of consecutive viewing angles;
a second determining module 220, configured to determine, in the first 3D image or the second 3D image, a target image content corresponding to a visual focus of a user if the image offset satisfies a preset condition;
The image processing module 230 is configured to adjust display parameters of the first 3D image or the second 3D image based on the target image content, obtain a third 3D image, and display the third 3D image, where a stereoscopic effect of the target image content in the third 3D image is stronger than a stereoscopic effect of image content other than the target image content.
In some embodiments, the image offset is used to indicate an offset between the same image content in the first 3D image and the second 3D image.
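The patent leaves the offset computation unspecified; a naive but self-contained sketch is a brute-force search over small integer shifts (a real system would use optical flow or phase correlation). The function names and the threshold value are illustrative assumptions:

```python
import numpy as np

def estimate_image_offset(img_a, img_b, max_shift=5):
    # Brute-force search: the (dy, dx) shift of img_a that minimises the
    # mean absolute difference against img_b is taken as the offset of
    # the shared content between the two consecutive-view frames.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img_a, dy, axis=0), dx, axis=1)
            err = float(np.mean(np.abs(shifted - img_b)))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def offset_exceeds(offset, threshold=3.0):
    # One possible "preset condition": the offset magnitude is large
    # enough to indicate a shaking or fast-changing picture.
    return float(np.hypot(*offset)) > threshold
```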
In some embodiments, the apparatus further comprises:
a receiving module, configured to receive a first input of a user before determining the image offset between the first 3D image and the second 3D image;
an opening module for opening a target mode in response to the first input;
the first determining module is specifically configured to:
and determining the image offset when the target mode is on and the display content is switched between the first 3D image and the second 3D image.
In some embodiments, the display parameters include at least one of: image style, resolution, and brightness.
In some embodiments, in the case where the display parameter includes the image style, the image processing module is specifically configured to: converting an image style of the image content other than the target image content;
In the case that the display parameter includes the resolution, the image processing module is specifically configured to: reducing the resolution of the image content other than the target image content;
in the case that the display parameter includes the brightness, the image processing module is specifically configured to: the brightness of the image contents other than the target image content is reduced.
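For the style-conversion case, the patent names target styles (cartoon, hand painting, oil painting, etc.) but not a conversion algorithm. As a loudly hypothetical stand-in, color posterisation gives a flat, cartoon-like look and shows the masking pattern shared by all three cases:

```python
import numpy as np

def convert_style_outside_target(image, target_mask, levels=4):
    # Posterisation as a stand-in "cartoon style": quantise intensities
    # outside the target region into a few flat bands, leaving the
    # target content untouched. Assumes 0-255 intensity values.
    out = image.astype(float)
    quantised = np.floor(out / 256.0 * levels) / levels * 256.0
    out[~target_mask] = quantised[~target_mask]
    return out
```

An actual product would more plausibly use a neural style-transfer model for the named styles; only the keep-the-target-region masking is taken from the text.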
According to the image processing device provided by the embodiment of the application, the image offset between the first 3D image and the second 3D image is determined when the display content is switched between them, so that it can be judged whether the 3D image is in a state likely to make the user dizzy. When the image offset satisfies the preset condition, the target image content corresponding to the user's visual focus is determined in the first 3D image or the second 3D image, and the image is processed to obtain and display a third 3D image in which the stereoscopic visual effect of the target image content is stronger than that of the image content other than the target image content. This reduces the user's sense of immersion in, and the realism of, the content outside the focus, so the dizziness caused by the 3D image can be reduced while the target image content at the user's visual focus keeps a good stereoscopic visual effect.
The image processing device in the embodiment of the application can be an electronic device, or can be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, robot, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 3, the embodiment of the present application further provides an electronic device 300, including a processor 301 and a memory 302, where the memory 302 stores a program or instruction executable on the processor 301. When executed by the processor 301, the program or instruction implements each step of the image processing method embodiment and achieves the same technical effects; to avoid repetition, no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device: the electronic device may include more or fewer components than shown, combine certain components, or arrange components differently, which is not described in detail herein.
Wherein, the processor 410 is configured to determine an image offset between a first 3D image and a second 3D image in a case where display content is switched between the first 3D image and the second 3D image, the first 3D image and the second 3D image being images of consecutive viewing angles;
under the condition that the image offset meets a preset condition, determining target image content corresponding to a visual focus of a user in the first 3D image or the second 3D image;
and adjusting display parameters of the first 3D image or the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image, wherein the stereoscopic effect of the target image content in the third 3D image is stronger than that of the image content except the target image content.
Optionally, the image offset is used to indicate an offset between the same image content in the first 3D image and the second 3D image.
Optionally, the user input unit 407 is configured to receive a first input of a user before the determining of an image offset between the first 3D image and the second 3D image;
the processor 410 is further configured to turn on a target mode in response to the first input;
And determining the image offset when the target mode is on and the display content is switched between the first 3D image and the second 3D image.
Optionally, the display parameter includes at least one of: image style, resolution, and brightness.
Optionally, the processor 410 is further configured to convert the image style of the image content other than the target image content, if the display parameter includes the image style;
reducing resolution of the image content other than the target image content in a case where the display parameter includes the resolution;
in the case where the display parameter includes the brightness, the brightness of the image content other than the target image content is reduced.
According to the electronic device provided by the embodiment of the application, the image offset between the first 3D image and the second 3D image is determined when the display content is switched between them, so that it can be judged whether the 3D image is in a state likely to make the user dizzy. When the image offset satisfies the preset condition, the target image content corresponding to the user's visual focus is determined in the first 3D image or the second 3D image, and the image is processed to obtain and display a third 3D image in which the stereoscopic visual effect of the target image content is stronger than that of the image content other than the target image content. This reduces the user's sense of immersion in, and the realism of, the content outside the focus, so the dizziness caused by the 3D image can be reduced while the target image content at the user's visual focus keeps a good stereoscopic visual effect.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042, the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 409 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). Memory 409 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described image processing method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, they may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment methods may be implemented by means of software plus a necessary general hardware platform; they may of course also be implemented by hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Under the inspiration of the present application, those of ordinary skill in the art may devise many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (11)

1. An image processing method, comprising:
determining an image offset between a first 3D image and a second 3D image in a case where display contents are switched between the first 3D image and the second 3D image, the first 3D image and the second 3D image being images of consecutive viewing angles;
under the condition that the image offset meets a preset condition, determining target image content corresponding to a visual focus of a user in the first 3D image or the second 3D image;
and adjusting display parameters of the first 3D image or the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image, wherein the stereoscopic effect of the target image content in the third 3D image is stronger than that of the image content except the target image content.
2. The image processing method according to claim 1, wherein the image offset is used to indicate an offset between the same image contents in the first 3D image and the second 3D image.
3. The image processing method according to claim 1, wherein before the determining the image offset between the first 3D image and the second 3D image, the method further comprises:
Receiving a first input of a user;
responsive to the first input, turning on a target mode;
the determining an image offset between the first 3D image and the second 3D image includes:
and determining the image offset when the target mode is on and the display content is switched between the first 3D image and the second 3D image.
4. A method of image processing according to any one of claims 1 to 3, wherein the display parameters include at least one of: image style, resolution, and brightness.
5. The image processing method according to claim 4, wherein, in the case where the display parameter includes the image style, the adjusting the display parameter of the first 3D image or the second 3D image includes: converting an image style of the image content other than the target image content;
in the case where the display parameter includes the resolution, the adjusting the display parameter of the first 3D image or the second 3D image includes: reducing the resolution of the image content other than the target image content;
in the case where the display parameter includes the luminance, the adjusting the display parameter of the first 3D image or the second 3D image includes: the brightness of the image contents other than the target image content is reduced.
6. An image processing apparatus, comprising:
a first determining module for determining an image offset between a first 3D image and a second 3D image in a case where display contents are switched between the first 3D image and the second 3D image, the first 3D image and the second 3D image being images of consecutive viewing angles;
the second determining module is used for determining target image content corresponding to the visual focus of the user in the first 3D image or the second 3D image under the condition that the image offset meets a preset condition;
and the image processing module is used for adjusting the display parameters of the first 3D image or the second 3D image based on the target image content to obtain a third 3D image and displaying the third 3D image, wherein the stereoscopic vision effect of the target image content in the third 3D image is stronger than that of the image content except the target image content.
7. The image processing apparatus according to claim 6, wherein the image offset is used to indicate an offset between the same image content in the first 3D image and the second 3D image.
8. The image processing apparatus according to claim 6, wherein the apparatus further comprises:
a receiving module, configured to receive a first input of a user before determining an image offset between the first 3D image and the second 3D image;
an opening module for opening a target mode in response to the first input;
the first determining module is specifically configured to:
and determining the image offset when the target mode is on and the display content is switched between the first 3D image and the second 3D image.
9. The image processing apparatus according to any one of claims 6 to 8, wherein the display parameter includes at least one of: image style, resolution, and brightness.
10. The image processing apparatus according to claim 9, wherein in the case where the display parameter includes the image style, the image processing module is specifically configured to: converting an image style of the image content other than the target image content;
in the case that the display parameter includes the resolution, the image processing module is specifically configured to: reducing the resolution of the image content other than the target image content;
in the case that the display parameter includes the brightness, the image processing module is specifically configured to: the brightness of the image contents other than the target image content is reduced.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the image processing method of any of claims 1-5.
CN202310791091.4A 2023-06-29 2023-06-29 Image processing method, image processing device and electronic equipment Pending CN116723307A (en)

Publications (1)

Publication Number: CN116723307A, Publication Date: 2023-09-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination