CN111158567B - Processing method and electronic equipment - Google Patents

Processing method and electronic equipment

Info

Publication number
CN111158567B
CN111158567B (application number CN201911403953.1A)
Authority
CN
China
Prior art keywords
output device
image
parameter
output
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911403953.1A
Other languages
Chinese (zh)
Other versions
CN111158567A (en)
Inventor
董芳菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911403953.1A priority Critical patent/CN111158567B/en
Publication of CN111158567A publication Critical patent/CN111158567A/en
Application granted granted Critical
Publication of CN111158567B publication Critical patent/CN111158567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a processing method and an electronic device, the method comprising: obtaining image data of the environment in which a first output device is located; processing the image data to obtain visual parameters of the environment in which the first output device is located, the visual parameters being capable of characterizing the visual effect presented by the environment or by the image data; and generating an adjustment instruction according to the visual parameters, the adjustment instruction being used to determine the target image output by the first output device so that the target image output by the first output device matches the visual parameters. Thus, the target image output by the output device can be determined according to image data of the environment in which the output device is located, without the user manually selecting an image, and the image output by the output device can be made to match the visual parameters of the environment, thereby simplifying the user's operation flow and reducing operational complexity.

Description

Processing method and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a processing method and an electronic device.
Background
At present, when an image is output and displayed on an electronic screen, the user generally has to select the image to be output manually, which results in high operational complexity for the user.
Disclosure of Invention
In view of the above, the present application provides a processing method, an apparatus and an electronic device, as follows:
a method of processing, comprising:
obtaining image data of an environment where a first output device is located;
processing the image data to obtain visual parameters of an environment in which the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data;
and generating an adjusting instruction according to the visual parameters, wherein the adjusting instruction is used for determining a target image output by the first output device so that the target image output by the first output device is matched with the visual parameters.
In the above method, preferably, determining the target image output by the first output device includes:
obtaining at least one parameter dimension of the visual parameter, the visual parameter having a parameter value in the parameter dimension;
and determining a target image which matches the parameter value of at least one parameter dimension as the target image output by the first output device.
In the above method, preferably, determining the target image output by the first output device includes:
and adjusting the target image output by the first output device so that the target image output by the first output device is matched with the parameter value of at least one parameter dimension of the visual parameter.
In the above method, preferably, matching the target image output by the first output device with the parameter value of at least one parameter dimension of the visual parameters includes:
the target image output by the first output device matching the parameter values of the largest number of dimensions among the at least one parameter dimension of the visual parameters;
or,
the target image output by the first output device matches a parameter value of a target parameter dimension of the at least one parameter dimension of the visual parameters, the target parameter dimension being a dimension of the at least one parameter dimension having a highest dimension priority.
In the above method, preferably, determining a target image matching the parameter value of at least one parameter dimension includes:
obtaining at least one image whose similarity to the parameter value of the parameter dimension is higher than a similarity threshold;
and obtaining the target image from the at least one image.
In the above method, preferably, obtaining the image data of the environment in which the first output device is located includes:
and receiving image data acquired and transmitted by an image acquisition device, wherein the image acquisition device is in the environment where the first output device is located.
In the above method, preferably, the image data includes at least a partial image of the first output device;
wherein processing the image data to obtain visual parameters of an environment in which the first output device is located comprises:
identifying a target image area in the image data that is adjacent to the image of the first output device;
and processing the target image area to obtain the visual parameters of the environment where the first output device is located.
Preferably, in the method, the image data does not include an image of the first output device, and processing the image data to obtain the visual parameter of the environment where the first output device is located includes:
identifying at least one object in the image data associated with the first output device;
and processing the image corresponding to the at least one object to obtain the visual parameters of the environment where the first output device is located.
In the above method, preferably, obtaining the image data of the environment in which the first output device is located includes:
receiving image data acquired and transmitted by an image acquisition component on a second output device, wherein the second output device is in the environment where the first output device is located, and the acquisition direction of the image acquisition component faces the first output device;
or,
the image acquisition component on the first output device is used for acquiring the image data, the image acquisition component being arranged on an edge of the first output device, and the acquisition direction of the image acquisition component forming a preset included angle with the image output direction of the first output device.
A processing apparatus, comprising:
an image obtaining unit for obtaining image data of an environment in which the first output device is located;
the image processing unit is used for processing the image data to obtain visual parameters of the environment where the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data;
and the instruction generating unit is used for generating an adjusting instruction according to the visual parameters, wherein the adjusting instruction is used for determining the target image output by the first output device so as to enable the target image output by the first output device to be matched with the visual parameters.
An electronic device, comprising:
the image acquisition equipment is used for acquiring image data of the environment where the first output equipment is located;
a processor for processing the image data to obtain visual parameters of an environment in which the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data; generating an adjusting instruction according to the visual parameters;
a transmission interface, configured to transmit the adjustment instruction to the first output device, where the adjustment instruction is used to determine a target image output by the first output device, so that the target image output by the first output device matches the visual parameter.
According to the technical scheme above, the processing method, the processing apparatus, and the electronic device disclosed in the application obtain the visual parameters of the environment in which an output device, such as an electronic screen, is located by obtaining image data of that environment, so that the target image output by the output device can be determined based on the visual parameters and matched with the visual parameters of the environment in which the output device is located. Thus, the target image output by the output device can be determined according to image data of the environment in which the output device is located, without the user manually selecting an image, and the image output by the output device can be made to match the visual parameters of the environment, thereby simplifying the user's operation flow and reducing operational complexity.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a processing method according to an embodiment of the present disclosure;
FIGS. 2-10 are exemplary diagrams of embodiments of the present application, respectively;
fig. 11 is a schematic structural diagram of a processing apparatus according to a second embodiment of the present application;
fig. 12 is a schematic structural diagram of an output device according to a third embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application;
fig. 14-17 are diagrams respectively illustrating an embodiment of the present application being applied to an image output of an electronic photo frame.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of an implementation of a processing method provided in an embodiment of the present application is shown. The method is applied to an electronic device capable of image processing, such as a mobile phone, a tablet, a server, or an electronic photo frame. The method in this embodiment is mainly used for processing the image on an output device, such as an electronic photo frame, so that the image output by the output device matches the visual effect presented by the environment in which the output device is located, without any user operation, thereby reducing the complexity of the user's operation.
Specifically, the method in this embodiment may include the following steps:
step 101: image data of an environment in which the first output device is located is obtained.
In this embodiment, the image data of the environment where the first output device is located may be acquired by an image acquisition device or a component, such as a camera.
The image data may include all or part of the image of the first output device, as shown in fig. 2, in this embodiment, the image data of the environment where the first output device is located is obtained by performing image acquisition by the image acquisition component toward the location where the first output device is located;
or, the image data does not include the image of the first output device, as shown in fig. 3, in this embodiment, the image capturing component may not perform image capturing towards the position of the first output device, but the image capturing component is in the same environment as the output device, and further the image capturing component can capture the image data of the environment where the first output device is located.
It should be noted that the first output device in this embodiment may be an output device such as an electronic photo frame, an electronic screen, or a mobile phone screen, and the first output device is capable of outputting an image.
Step 102: the image data is processed to obtain visual parameters of an environment in which the first output device is located.
The visual parameters can represent the visual effect presented by the environment in which the first output device is located, or by the image data obtained from that environment, such as a cool color tone or a European decorative style.
Specifically, in this embodiment, the image data may be processed by image-processing algorithms, such as a text recognition algorithm, a color recognition algorithm, or an object recognition algorithm, so as to obtain visual parameters of the environment in which the first output device is located, such as text content, color, or objects, which represent the visual effect presented by that environment or by the image data.
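The patent does not disclose a concrete recognition algorithm for step 102; the following is a minimal sketch, assuming the simplest possible color-recognition rule (red-dominant pixels read as a warm tone, blue-dominant as cool). The function names `dominant_tone` and `visual_parameters` are illustrative, not from the patent.

```python
# Hypothetical sketch of step 102: deriving one visual parameter
# (warm vs. cool color tone) from the environment's pixel data.

def dominant_tone(pixels):
    """Classify an iterable of (R, G, B) pixels as 'warm' or 'cool'.

    Warm hues are red-dominant, cool hues are blue-dominant;
    a tie falls back to 'neutral'.
    """
    if not pixels:
        raise ValueError("no pixel data")
    r_total = sum(p[0] for p in pixels)
    b_total = sum(p[2] for p in pixels)
    if r_total > b_total:
        return "warm"
    if b_total > r_total:
        return "cool"
    return "neutral"

def visual_parameters(pixels):
    """Bundle recognized dimensions into the visual parameters from
    which the adjustment instruction is later generated."""
    return {"color_tone": dominant_tone(pixels)}

# A reddish living-room wall reads as a warm environment.
env = [(200, 120, 80), (210, 130, 90), (190, 110, 70)]
print(visual_parameters(env))  # {'color_tone': 'warm'}
```

A real implementation would use an image library and add further dimensions (texture, contour, style), each becoming one more key in the returned dictionary.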
Step 103: and generating an adjusting instruction according to the visual parameters.
The generated adjusting instruction is used for determining the target image output by the first output device so that the target image output by the first output device is matched with the visual parameters.
It should be noted that, the adjusting instruction is used to determine the target image output by the first output device, and may be: the adjusting instruction is used for adjusting the target image currently output by the first output device, namely modifying the current image; or may be: the adjustment instructions are used to determine the target image to be output by the first output device, i.e. to replace the current image with a new image.
Specifically, in this embodiment, the image display parameters of the target image output by the first output device may be determined according to the visual parameters, or a new target image to be output by the first output device may be determined according to the visual parameters, so that the visual effect presented by the target image output by the first output device finally matches the visual effect, represented by the visual parameters, of the environment in which the first output device is located or of the corresponding image data. For example, the color tone of the target image output by the first output device is completely consistent with the color tone of the environment, both being warm tones; or the home-furnishing style in the target image output by the first output device is approximately consistent with the home-furnishing style in the environment, for example a British style being similar to an Italian style.
As can be seen from the foregoing technical solution, in the processing method provided in the first embodiment of the present application, the visual parameters of the environment in which an output device, such as an electronic screen, is located are obtained by obtaining image data of that environment, so that the target image output by the output device can be determined based on the visual parameters and matched with the visual parameters of the environment. Thus, in this embodiment, the target image output by the output device can be determined according to image data of the environment in which the output device is located, without the user manually selecting an image, and the image output by the output device can be made to match the visual parameters of the environment, thereby reducing the user's operation flow, lowering operational complexity, and improving image output efficiency.
In one implementation, after the visual parameters are obtained, a new image matching the visual parameters may be found and used as the target image output by the first output device. This may be implemented as follows, so as to generate the corresponding adjustment instruction:
firstly, obtaining at least one parameter dimension in visual parameters, such as color dimension, texture dimension, outline dimension, style dimension and other parameter dimensions, wherein the visual parameters have corresponding parameter values in the parameter dimensions, such as blue, spot texture, round outline, Chinese style and other parameter values;
then, in the image set containing at least one image, determining a target image which matches the parameter value of at least one parameter dimension as the target image output by the first output device.
Specifically, the target image is found in the image set according to the following rule: the image found matches the parameter values of the largest number of the at least one parameter dimension of the visual parameters; that is, the target image found in the image set matches the parameter values of as many parameter dimensions of the visual parameters as possible. For example, the image set is searched for a target image matching the parameter values of all parameter dimensions of the visual parameters, such as the color, texture, contour, and style dimensions; or, if no image in the image set matches all parameter dimensions, the image matching the most parameter dimensions of the visual parameters is taken as the target image, for example a target image matching the color, texture, and style dimensions (with the contour dimension unmatched);
Or, the target image is found in the image set according to the following rule: the image found matches the parameter value of a particular target parameter dimension among the at least one parameter dimension of the visual parameters; that is, the target image found in the image set matches the parameter values of one or more target parameter dimensions specified in the visual parameters. A target parameter dimension may be understood as a parameter dimension with priority in the visual parameters: there may be a single target parameter dimension, i.e., the highest-priority parameter dimension of the visual parameters, or several, e.g., the highest- and next-highest-priority parameter dimensions. Thus, in this embodiment, by specifying parameter dimensions in the visual parameters, the target image found is an image matching the parameter values of the one or more highest-priority target parameter dimensions.
It should be noted that, when the target parameter dimensions are multiple, in this embodiment, a target image that matches the multiple target parameter dimension parameter values at the same time may be searched in the image set, and a target image that matches one of the target parameter dimension parameter values may also be sequentially searched in the image set according to the priority order of the multiple target parameter dimensions. For example, if a target image matching the target parameter dimensional parameter value of the highest priority is not found in the image set, a target image matching the target parameter dimensional parameter value of the next highest priority may be further found, and so on until a target image matching a target parameter dimensional parameter value is found. For example, one or more target images that can match the parameter value in the color dimension with the highest priority among the visual parameters are found in the image set, and if a target image that matches pink in the color dimension is not found in the image set, one or more target images that match the parameter value in the texture dimension are further found in the image set until a target image that matches a certain target parameter dimension parameter value among the visual parameters is found in the image set.
Based on the above implementation, in the present embodiment, the parameter value of at least one parameter dimension in the target image matching visual parameters may be understood as: the similarity between the target image and the parameter value of at least one parameter dimension of the visual parameter is higher than the similarity threshold, for example, the similarity between the image display parameter of the target image in one or more parameter dimensions, such as texture, color, style, and the like, and the visual parameter is greater than 80%, and the target image is determined to be the target image to be output by the first output device.
Specifically, in this embodiment, when determining a target image matching at least one parameter dimension parameter value, one or more images with a similarity higher than a similarity threshold with the parameter value of the parameter dimension may be obtained in the image set, for example, by obtaining the similarity between each image in the image set and the parameter value of the parameter dimension in the visual parameter, one or more images with a similarity higher than the similarity threshold are screened out, and then the target image is obtained from the images.
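The priority-ordered fallback described above can be sketched as a short search routine. The patent does not specify data structures; here each image and the visual parameters are assumed to be dictionaries of per-dimension values, and matching is exact (the similarity-threshold variant would replace the equality test with a similarity score compared against a threshold). The name `find_target_image` is illustrative.

```python
# Hypothetical sketch of the priority-ordered matching rule: try the
# highest-priority target dimension first, and fall back to the next
# priority whenever no image in the set matches.

def find_target_image(image_set, visual_params, priority):
    """image_set: list of dicts of per-dimension values,
    e.g. {'color': 'pink', 'texture': 'spot'}.
    visual_params: the environment's per-dimension values.
    priority: dimensions ordered from highest to lowest priority."""
    for dim in priority:
        wanted = visual_params.get(dim)
        matches = [img for img in image_set if img.get(dim) == wanted]
        if matches:
            return matches[0]  # first image matching this dimension
    return None  # no image matches any target dimension

images = [
    {"color": "blue", "texture": "curve"},
    {"color": "green", "texture": "spot"},
]
env = {"color": "pink", "texture": "spot"}
# No image is pink (highest priority), so matching falls back to texture.
print(find_target_image(images, env, ["color", "texture"]))
# {'color': 'green', 'texture': 'spot'}
```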
In one implementation, after the visual parameters are obtained, the target image currently being output by the first output device may be adjusted according to the visual parameters so that the adjusted target image matches the visual parameters. This may be implemented as follows, so as to generate the corresponding adjustment instruction:
the target image output by the first output device is adjusted, for example, one or more image display parameters of the target image are adjusted, so that the target image output by the first output device can be matched with the parameter value of at least one parameter dimension in the visual parameters.
Specifically, the image display parameters of the target image output by the first output device are adjusted according to the following rule: the adjusted target image matches the parameter values of the largest number of the at least one parameter dimension of the visual parameters; that is, with the adjusted image display parameters, the target image matches the parameter values of as many parameter dimensions of the visual parameters as possible. For example, one or more of the image display parameters of the target image, such as color, texture, contour, and style, are adjusted so that the adjusted target image matches the parameter values of all parameter dimensions of the visual parameters, such as the color, texture, contour, and style dimensions; or, if that is not possible, so that the adjusted target image matches the most parameter dimensions of the visual parameters, for example the color, texture, and style dimensions (with the contour dimension unmatched);
Or, the image display parameters of the target image output by the first output device are adjusted according to the following rule: the adjusted target image matches the parameter value of a particular target parameter dimension among the at least one parameter dimension of the visual parameters; that is, the adjusted image display parameters match the parameter values of one or more target parameter dimensions specified in the visual parameters. A target parameter dimension may be understood as a parameter dimension with priority in the visual parameters: there may be a single target parameter dimension, i.e., the highest-priority parameter dimension of the visual parameters, or several, e.g., the highest- and next-highest-priority parameter dimensions. Thus, in this embodiment, by specifying parameter dimensions in the visual parameters, the adjusted target image is an image matching the parameter values of the one or more highest-priority target parameter dimensions.
It should be noted that, when the target parameter dimensions are multiple, in this embodiment, the target image may be adjusted to simultaneously match the multiple target parameter dimension parameter values, or the target image may be adjusted to search for and match one of the target parameter dimension parameter values according to the priority order of the multiple target parameter dimensions. For example, first, an attempt is made to adjust the target image to match the highest priority target parameter dimensional parameter value, and if the target image cannot be adjusted to match the highest priority target parameter dimensional parameter value, the target image may be further adjusted to match the next highest priority target parameter dimensional parameter value, and so on, until the adjusted target image can match one target parameter dimensional parameter value. For example, when the target image is adjusted, the texture display parameter of the target image may be adjusted to match the parameter value of the texture dimension with the highest priority in the visual parameters, and if the texture display parameter in the target image cannot be adjusted to match the texture of the curved pattern in the texture dimension in the visual parameters, the color display parameter in the target image may be adjusted to match the pink color in the color dimension in the visual parameters until the target image is adjusted to match the parameter value of the certain target parameter dimension in the visual parameters.
Based on the above implementation, in the present embodiment, the adjusting of the parameter value of the at least one parameter dimension in the image display parameter matching visual parameter of the target image may be understood as: the similarity between the image display parameter of the target image and the parameter value of the at least one parameter dimension of the visual parameter is higher than a similarity threshold, e.g., the similarity between the image display parameter of the target image and the visual parameter in one or more parameter dimensions, such as texture, color, style, etc., is greater than 80%.
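The adjustment-until-similar behavior described above can be sketched with a single scalar display parameter. This is an assumption-laden illustration: the patent names no units, so warmth is modeled here as an integer in 0..100, similarity as 100 minus the absolute difference, and the threshold as 80 (standing in for the "greater than 80%" example); `adjust_warmth` is an invented name.

```python
# Hypothetical sketch of the in-place adjustment path: nudge one image
# display parameter toward the environment's value until the similarity
# reaches the threshold.

SIMILARITY_THRESHOLD = 80  # percent, per the 80% example above

def similarity(a, b):
    """Similarity (percent) of two display-parameter values in 0..100."""
    return 100 - abs(a - b)

def adjust_warmth(image_warmth, env_warmth, step=10, max_steps=20):
    """Step the image's warmth toward the environment's warmth until
    the two are at least SIMILARITY_THRESHOLD percent similar."""
    for _ in range(max_steps):
        if similarity(image_warmth, env_warmth) >= SIMILARITY_THRESHOLD:
            break
        image_warmth += step if env_warmth > image_warmth else -step
    return image_warmth

# A cool image (20) in a warm environment (90) is warmed until it sits
# within 20 units of the environment's value.
print(adjust_warmth(20, 90))  # 70
```

The same loop generalizes to several dimensions by iterating over them in priority order, moving to the next dimension only when the current one cannot be brought above the threshold.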
In one implementation, in step 101, the image data of the environment in which the first output device is located may be obtained by receiving image data acquired and transmitted by an image acquisition device, where the image acquisition device is located in the environment in which the first output device is located, so that the obtained image data reflects that environment.
The image acquisition device is independent from the first output device, that is, in this embodiment, the image acquisition device different from the first output device is used to acquire image data of an environment where the first output device is located, and then a target image output by the first output device is determined after the visual parameters are identified, so that the target image output by the first output device is matched with the visual parameters corresponding to the environment where the first output device is located.
Specifically, the image capturing device may be an image capturing device disposed at a specific position in an environment where the first output device is located, such as a camera disposed on a wall body as shown in fig. 4, where the first output device, such as an electronic photo frame, is disposed indoors and located in the same living room environment as the camera; or, the image capturing device may also be an image capturing device that is disposed on the mobile terminal in an environment where the first output device is located, such as a camera on a mobile phone shown in fig. 5, where the mobile phone and the first output device are disposed in the same room, such as an electronic photo frame.
In another implementation, the image acquisition device may be a component of the first output device, or a component of a second output device of the same type as the first output device. That is, in this embodiment, the image acquisition component (image acquisition device) of the first output device is used to acquire image data of the environment in which the first output device is located. As shown in fig. 6, the image acquisition component, such as a camera, on the first output device, such as an electronic photo frame, may be disposed at an edge of the first output device, with the acquisition direction of the image acquisition component forming a preset included angle, such as 90 degrees, with the image output direction of the first output device. Alternatively, in this embodiment, an image acquisition component (image acquisition device) on a second output device in the environment where the first output device is located may acquire the image data; as shown in fig. 7, the acquisition direction of the image acquisition component, such as a camera, on the second output device, such as electronic photo frame B, faces the first output device, such as electronic photo frame A.
In one implementation, the image data includes at least a partial image of the first output device. As shown in fig. 8, the image data includes an image of the lower-left corner region of the electronic photo frame as well as an image of the area outside the electronic photo frame. In this case, when step 102 processes the image data to obtain the visual parameters of the environment where the first output device is located, it may be implemented as follows:
first, a target image area adjacent to the image of the first output device in the image data, that is, a target image area other than the first output device in the image data, is identified; as shown by the hatched portion in fig. 9, the target image area X outside the electronic photo frame is identified;
the target image area is then processed to obtain the visual parameters of the environment in which the first output device is located. That is, in this embodiment, the visual parameters of the environment where the first output device is located are identified from the image area in the image data that does not belong to the first output device. For example, image recognition is performed on the target image area X in fig. 9 to identify parameter values in one or more of the dimensions of color, texture, contour, and style, so as to obtain visual parameters representing the environment where the electronic photo frame is located or the visual effect presented by the image data.
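A minimal sketch of this path, assumed rather than taken from this application, is shown below: pixels inside the first output device's bounding box are excluded, and the remaining target image area X is reduced to a simple average-color visual parameter. A real system would detect the device region rather than receive it as a known box.

```python
def visual_color(pixels, width, height, device_box):
    """Average RGB over all pixels outside the device's bounding box."""
    x0, y0, x1, y1 = device_box
    total_r = total_g = total_b = 0
    count = 0
    for y in range(height):
        for x in range(width):
            if x0 <= x < x1 and y0 <= y < y1:
                continue  # skip the electronic photo frame itself
            r, g, b = pixels[y * width + x]
            total_r += r
            total_g += g
            total_b += b
            count += 1
    if count == 0:
        return (0, 0, 0)
    return (total_r // count, total_g // count, total_b // count)

# 2x2 image whose top-left pixel is the photo frame; the other three pixels
# form the target image area X.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(visual_color(pixels, 2, 2, (0, 0, 1, 1)))  # → (85, 170, 170)
```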
In another implementation, the image data does not include an image of the first output device, as shown in fig. 10. In this case, although the image acquisition device or component that captures the image data is in the environment where the first output device is located, its acquisition direction does not face the first output device. The image data therefore does not include an image of the first output device, even though it is image data of that environment; it does, however, include image data of other elements or objects Y in the environment, such as household objects like a sofa, a wall, or a desk lamp in the same indoor environment as the electronic photo frame. Accordingly, in this embodiment, when step 102 processes the image data to obtain the visual parameters of the environment where the first output device is located, it may be implemented as follows:
first, at least one object associated with the first output device in the image data is identified, for example, the sofa object or desk lamp object Y related to the electronic photo frame in fig. 10; then, the images corresponding to these objects are processed, for example, parameter values in one or more parameter dimensions such as color, texture, contour, and style are identified for the image areas corresponding to the objects, so as to obtain visual parameters representing the environment where the electronic photo frame is located or the visual effect presented by the image data.
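The object-based path of fig. 10 can be sketched as follows, with heavy hedging: object detection is stubbed out, and the `detections` mapping, the coarse color quantization, and all names are illustrative assumptions rather than details of this application.

```python
from collections import Counter

def dominant_color(region_pixels):
    """Most frequent coarsely quantized RGB value in an object's image region."""
    quantized = [(r // 32 * 32, g // 32 * 32, b // 32 * 32) for r, g, b in region_pixels]
    return Counter(quantized).most_common(1)[0][0]

def visual_params_from_objects(detections):
    """detections maps an object label to the pixels of its detected image region."""
    return {label: dominant_color(region) for label, region in detections.items()}

# Assumed detector output: regions for a sofa and a desk lamp in the same room.
detections = {
    "sofa": [(30, 60, 120)] * 8 + [(200, 200, 200)] * 2,
    "desk_lamp": [(240, 220, 180)] * 5,
}
print(visual_params_from_objects(detections))
```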
It should be noted that, in the above embodiments, object recognition on the image data, parameter recognition on the image data, and the like may be implemented using an image recognition algorithm or a recognition model built on machine learning, a neural network, or the like, so as to improve the accuracy and efficiency of image processing.
Referring to fig. 11, a schematic structural diagram of a processing apparatus provided in the second embodiment of the present application is shown. The processing apparatus may be configured in an electronic device capable of image processing, such as a mobile phone, a pad, a server, or an electronic photo frame. The apparatus in this embodiment is mainly used to process images in an output device such as an electronic photo frame, so that the image output by the output device matches the visual effect presented by the environment where the output device is located without user operation, thereby reducing the complexity of user operation.
Specifically, the apparatus in this embodiment may include the following units:
an image obtaining unit 1101 for obtaining image data of an environment in which the first output device is located;
an image processing unit 1102, configured to process the image data to obtain a visual parameter of an environment in which the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data;
an instruction generating unit 1103, configured to generate an adjustment instruction according to the visual parameter, where the adjustment instruction is used to determine a target image output by the first output device, so that the target image output by the first output device matches the visual parameter.
As can be seen from the foregoing technical solution, the processing apparatus provided in the second embodiment of the present application obtains the visual parameters of the environment where the output device, such as an electronic screen, is located from the image data of that environment, and then generates, based on the visual parameters, an adjustment instruction capable of determining the target image for the output device, so that the target image output by the output device matches the visual parameters of its environment. Therefore, in this embodiment, the target image output by the output device can be determined from the image data of the environment where the output device is located without the user manually selecting an image, and the image output by the output device matches the visual parameters of that environment, thereby reducing the user's operation flow, reducing operation complexity, and improving image output efficiency.
In one implementation, the instruction generating unit 1103 determines the target image output by the first output device, including:
obtaining at least one parameter dimension of the visual parameter, the visual parameter having a parameter value in the parameter dimension;
and determining a target image which matches the parameter value of at least one parameter dimension as the target image output by the first output device.
In another implementation, the instruction generating unit 1103 determines the target image output by the first output device, including:
and adjusting the target image output by the first output device so that the target image output by the first output device is matched with the parameter value of at least one parameter dimension of the visual parameter.
Based on the above implementation, the matching of the target image output by the first output device with the parameter values of at least one parameter dimension of the visual parameters includes:
the target image output by the first output device matches a parameter value of at most one parameter dimension of the at least one parameter dimension of the visual parameters;
or the target image output by the first output device matches a parameter value of a target parameter dimension of the at least one parameter dimension of the visual parameters, wherein the target parameter dimension is a dimension with highest priority among the at least one parameter dimension.
Optionally, the instruction generating unit 1103 determines a target image matching a parameter value of at least one of the parameter dimensions, including:
obtaining at least one image with parameter value similarity higher than a similarity threshold value with the parameter dimension;
in the at least one image, a target image is obtained.
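The two matching rules above — match every available parameter dimension, or fall back to the dimension with the highest priority — can be sketched as below. The scalar-distance similarity, the 0.8 threshold, and the candidate data are illustrative assumptions, not details fixed by this application.

```python
def similarity(a, b):
    """Scalar similarity in [0, 1]; parameter values are assumed normalized."""
    return 1.0 - abs(a - b)

def pick_target(candidates, visual, priority, threshold=0.8):
    """candidates: {name: {dimension: value}}; priority: dimensions, highest first."""
    # First look for a candidate matching every parameter dimension.
    for name, params in candidates.items():
        if all(similarity(params[d], visual[d]) > threshold for d in priority):
            return name
    # Otherwise fall back to the highest-priority dimension alone.
    top = priority[0]
    best = max(candidates, key=lambda n: similarity(candidates[n][top], visual[top]))
    if similarity(candidates[best][top], visual[top]) > threshold:
        return best
    return None

visual = {"color": 0.9, "style": 0.2}
candidates = {
    "a": {"color": 0.85, "style": 0.8},   # right color, wrong style
    "b": {"color": 0.88, "style": 0.25},  # matches both dimensions
}
print(pick_target(candidates, visual, ["color", "style"]))  # → b
```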
In one implementation, the image obtaining unit 1101 obtains image data of an environment in which the first output device is located, including:
receiving image data acquired and transmitted by an image acquisition device, where the image acquisition device is in the environment where the first output device is located.
Optionally, the image data includes at least a partial image of the first output device; the image processing unit 1102 is configured to process the image data to obtain visual parameters of an environment where the first output device is located, including:
identifying a target image area in the image data that is adjacent to the image of the first output device; and processing the target image area to obtain the visual parameters of the environment where the first output device is located.
Optionally, the image data does not include an image of the first output device, where the image processing unit 1102 processes the image data to obtain the visual parameters of the environment where the first output device is located, including:
identifying at least one object in the image data associated with the first output device; and processing the image corresponding to the at least one object to obtain the visual parameters of the environment where the first output device is located.
In one implementation, the image obtaining unit 1101 obtains image data of an environment in which the first output device is located, including:
receiving image data acquired and transmitted by an image acquisition component on a second output device, where the second output device is in the environment where the first output device is located, and the acquisition direction of the image acquisition component faces the first output device;
in another implementation, the image obtaining unit 1101 obtains image data of an environment in which the first output device is located, including:
acquiring image data through an image acquisition component on the first output device, where the image acquisition component is disposed at an edge of the first output device, and the acquisition direction of the image acquisition component forms a preset included angle with the image output direction of the first output device.
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding content in the foregoing, and details are not described here.
Referring to fig. 12, a schematic structural diagram of an output device provided in the third embodiment of the present application is shown. The output device may be a device capable of image processing, such as an electronic photo frame. The output device in this embodiment is mainly used to process its own images so that the image it outputs matches the visual effect presented by the environment where it is located without user operation, thereby reducing the complexity of user operation.
Specifically, the output device in this embodiment may include the following structure:
a display 1201 for outputting an image.
The display 1201 may be, for example, a liquid crystal display, and is used to output images.
A processor 1202 for obtaining image data of an environment in which the output device is located; processing the image data to obtain visual parameters of the environment where the output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data; and generating an adjusting instruction according to the visual parameters, wherein the adjusting instruction is used for determining a target image output by the output device so that the target image output by the output device is matched with the visual parameters.
According to the third embodiment of the present application, the visual parameters of the environment where the output device, such as an electronic screen, is located are obtained from the image data of that environment, and an adjustment instruction capable of determining the target image for the output device is generated based on the visual parameters, so that the target image output by the output device matches the visual parameters of its environment. Therefore, in this embodiment, the target image output by the output device can be determined from the image data of the environment where the output device is located without the user manually selecting an image, and the image output by the output device matches the visual parameters of that environment, thereby reducing the user's operation flow, reducing operation complexity, and improving image output efficiency.
It should be noted that, for the specific implementation of each part in the present embodiment, reference may be made to the corresponding content in the foregoing, and details are not described here.
Referring to fig. 13, a schematic structural diagram of an electronic device according to a fourth embodiment of the present disclosure is shown, where the electronic device may be an electronic device capable of performing image processing, such as a mobile phone, a pad, a server, and the like. The electronic device in this embodiment is mainly used for processing an image in an output device, such as an electronic photo frame, so that the image output by the output device matches with a visual effect presented by an environment where the output device is located, and user operation is not required, thereby achieving the purpose of reducing complexity of user operation.
Specifically, the electronic device in this embodiment may include the following structure:
the image acquisition device 1301 is configured to obtain image data of an environment where the first output device is located.
The image capture device 1301 may be a camera or the like, and can capture image data. Image capture device 1301 may or may not be facing the first output device, but image capture device 1301 is in the same environment as the first output device.
A processor 1302, configured to process the image data to obtain a visual parameter of an environment in which the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data; generating an adjusting instruction according to the visual parameters;
and a transmission interface 1303, configured to transmit the adjustment instruction to the first output device, where the adjustment instruction is used to determine a target image output by the first output device, so that the target image output by the first output device matches the visual parameter.
The transmission interface 1303 may be a transmission interface such as WiFi, bluetooth, or a network, so that the adjustment instruction can be transmitted from the electronic device to the first output device, so that the target image output by the first output device matches the visual parameter.
According to the foregoing technical solution, the electronic device provided in the fourth embodiment of the present application obtains the visual parameters of the environment where the output device, such as an electronic screen, is located from the image data of that environment, and generates, based on the visual parameters, an adjustment instruction capable of determining the target image for the output device, so that the target image output by the output device matches the visual parameters of its environment. Therefore, in this embodiment, the target image output by the output device can be determined from the image data of the environment where the output device is located without the user manually selecting an image, and the image output by the output device matches the visual parameters of that environment, thereby reducing the user's operation flow, reducing operation complexity, and improving image output efficiency.
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding content in the foregoing, and details are not described here.
Taking the first output device as an electronic photo frame hung on a wall as an example, in the technical scheme of the present application, when the hung electronic photo frame is first used, a mobile phone may be used to set some basic parameters, such as the image switching frequency and an image database or link. Then, the mobile phone camera, or a camera on the electronic photo frame, may be opened to capture a background image of the electronic photo frame in order to identify the color or texture material of the wall behind it. When an image is shown on the electronic photo frame, an image matching the identified environmental style of the electronic photo frame can then be selected automatically; see, for example, the output effect against a blue-toned background wall in fig. 14, the output effect against a pink-toned background wall in fig. 15, the minimalist-style output effect in fig. 16, and the natural wood-tone home output effect in fig. 17. Therefore, with the technical scheme of the present application, an output device such as an electronic photo frame can blend better into its environment and achieve an intelligent effect, thereby enriching the user's visual experience.
It should be noted that, in this embodiment, after an image acquisition device of an electronic device, such as a camera on a mobile phone, captures the background image of the first output device, such as an electronic photo frame, the processor of the mobile phone may perform the image processing and generate the adjustment instruction; the electronic photo frame is then instructed to adjust its output target image according to the adjustment instruction, or the target image is adjusted on the mobile phone side according to the adjustment instruction and then output to the electronic photo frame;
alternatively, in this embodiment, after the camera on the electronic photo frame captures the background image of the electronic photo frame, the processor on the electronic photo frame may perform the image processing, generate the adjustment instruction, and directly adjust the target image output by the electronic photo frame according to the adjustment instruction;
alternatively, in this embodiment, after the background image of the electronic photo frame is captured by the camera on the mobile phone or on the electronic photo frame, the background image may be uploaded to a cloud server for image processing; after the adjustment instruction is generated, the electronic photo frame is instructed to adjust its output target image according to the adjustment instruction, or the target image is adjusted on the cloud server side according to the adjustment instruction and then output to the electronic photo frame.
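The three deployment variants above share the same capture, process, and apply flow; only the device hosting each step changes. As a loose illustration (function names and the dictionary-shaped "adjustment instruction" are assumed), the placement can be abstracted as:

```python
def build_adjust_instruction(visual_params):
    """Processing step: may run on the phone, on the frame, or on a cloud server."""
    return {"action": "match_environment", "visual": visual_params}

def run_pipeline(capture, process, apply):
    """Wire capture → process → apply regardless of which device hosts each step."""
    background = capture()
    instruction = process(background)
    return apply(instruction)

# Variant 1: phone camera captures, phone processes, frame applies the instruction.
result = run_pipeline(
    capture=lambda: {"wall_color": "blue"},
    process=lambda img: build_adjust_instruction({"color": img["wall_color"]}),
    apply=lambda ins: f"frame shows a {ins['visual']['color']}-toned image",
)
print(result)  # → frame shows a blue-toned image
```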
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method of processing, comprising:
obtaining image data of an environment where a first output device is located;
processing the image data to obtain visual parameters of an environment in which the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data;
generating an adjusting instruction according to the visual parameters, wherein the adjusting instruction is used for determining a target image output by the first output device so that the target image output by the first output device is matched with the visual parameters;
the image data comprises at least a part of the image of the first output device; wherein processing the image data to obtain visual parameters of an environment in which the first output device is located comprises: identifying a target image area in the image data that is adjacent to the image of the first output device; processing the target image area to obtain visual parameters of the environment where the first output device is located;
determining a target image output by the first output device, comprising: obtaining at least one parameter dimension of the visual parameter, the visual parameter having a parameter value in the parameter dimension; determining a target image matching a parameter value of at least one of the parameter dimensions as a target image output by the first output device;
or, determining the target image output by the first output device includes: adjusting the target image output by the first output device so that the target image output by the first output device is matched with the parameter value of at least one parameter dimension of the visual parameter;
the matching of the target image output by the first output device with the parameter value of at least one parameter dimension of the visual parameter includes:
the target image output by the first output device matches a parameter value of at most one parameter dimension of the at least one parameter dimension of the visual parameters;
or,
the target image output by the first output device matches a parameter value of a target parameter dimension of the at least one parameter dimension of the visual parameters, the target parameter dimension being a dimension of the at least one parameter dimension having a highest dimension priority.
2. The method of claim 1, determining a target image matching a parameter value of at least one of the parameter dimensions, comprising:
obtaining at least one image with parameter value similarity higher than a similarity threshold value with the parameter dimension;
in the at least one image, a target image is obtained.
3. The method of claim 1, obtaining image data of an environment in which the first output device is located, comprising:
receiving image data acquired and transmitted by an image acquisition device, where the image acquisition device is in the environment where the first output device is located.
4. The method of claim 1, obtaining image data of an environment in which the first output device is located, comprising:
receiving image data acquired and transmitted by an image acquisition component on a second output device, where the second output device is in the environment where the first output device is located, and the acquisition direction of the image acquisition component faces the first output device;
or,
acquiring image data through an image acquisition component on the first output device, where the image acquisition component is disposed at an edge of the first output device, and the acquisition direction of the image acquisition component forms a preset included angle with the image output direction of the first output device.
5. The method of claim 1, the image data not including an image of the first output device, wherein processing the image data to obtain visual parameters of an environment in which the first output device is located comprises:
identifying at least one object in the image data associated with the first output device;
and processing the image corresponding to the at least one object to obtain the visual parameters of the environment where the first output device is located.
6. An electronic device, comprising:
the image acquisition equipment is used for acquiring image data of the environment where the first output equipment is located;
a processor for processing the image data to obtain visual parameters of an environment in which the first output device is located; the visual parameter is capable of characterizing a visual effect of the environment or the presentation of the image data; generating an adjusting instruction according to the visual parameters;
a transmission interface, configured to transmit the adjustment instruction to the first output device, where the adjustment instruction is used to determine a target image output by the first output device, so that the target image output by the first output device matches the visual parameter;
the image data comprises at least a part of the image of the first output device; wherein processing the image data to obtain visual parameters of an environment in which the first output device is located comprises: identifying a target image area in the image data that is adjacent to the image of the first output device; processing the target image area to obtain visual parameters of the environment where the first output device is located;
the determining the target image output by the first output device comprises: obtaining at least one parameter dimension of the visual parameter, the visual parameter having a parameter value in the parameter dimension; determining a target image matching a parameter value of at least one of the parameter dimensions as a target image output by the first output device;
or, determining the target image output by the first output device includes: adjusting the target image output by the first output device so that the target image output by the first output device is matched with the parameter value of at least one parameter dimension of the visual parameter;
the matching of the target image output by the first output device with the parameter value of at least one parameter dimension of the visual parameter includes:
the target image output by the first output device matches a parameter value of at most one parameter dimension of the at least one parameter dimension of the visual parameters;
or,
the target image output by the first output device matches a parameter value of a target parameter dimension of the at least one parameter dimension of the visual parameters, the target parameter dimension being a dimension of the at least one parameter dimension having a highest dimension priority.
CN201911403953.1A 2019-12-30 2019-12-30 Processing method and electronic equipment Active CN111158567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403953.1A CN111158567B (en) 2019-12-30 2019-12-30 Processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911403953.1A CN111158567B (en) 2019-12-30 2019-12-30 Processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111158567A CN111158567A (en) 2020-05-15
CN111158567B true CN111158567B (en) 2022-03-25

Family

ID=70559612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403953.1A Active CN111158567B (en) 2019-12-30 2019-12-30 Processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111158567B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385280A (en) * 2020-10-16 2022-04-22 华为技术有限公司 Parameter determination method and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102420896A (en) * 2010-09-27 2012-04-18 上海三旗通信科技有限公司 Operation method for switching thematic colour schemes of mobile terminal
CN105744336B (en) * 2014-12-11 2019-01-01 Tcl光电科技(惠州)有限公司 The method and system of the display styles of automatic replacement smart television
CN104657064A (en) * 2015-03-20 2015-05-27 上海德晨电子科技有限公司 Method for realizing automatic exchange of theme desktop for handheld device according to external environment
CN109426522A (en) * 2017-08-22 2019-03-05 阿里巴巴集团控股有限公司 Interface processing method, device, equipment, medium and the operating system of mobile device
CN108322719A (en) * 2018-02-12 2018-07-24 京东方科技集团股份有限公司 Head-up-display system and new line display methods, mobile devices
CN110275973B (en) * 2019-06-21 2023-01-06 京东方科技集团股份有限公司 Display method of image display device and electronic equipment

Also Published As

Publication number Publication date
CN111158567A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
US11503205B2 (en) Photographing method and device, and related electronic apparatus
CN105187810B Auto white balance method and electronic media device based on face color characteristics
CN107566717B (en) Shooting method, mobile terminal and computer readable storage medium
KR101605983B1 (en) Image recomposition using face detection
CN107862018B Recommendation method and device for food cooking methods
KR101725884B1 (en) Automatic processing of images
CN108198177A (en) Image acquiring method, device, terminal and storage medium
CN111415302B (en) Image processing method, device, storage medium and electronic equipment
CN106529406A (en) Method and device for acquiring video abstract image
JP2020119156A (en) Avatar creating system, avatar creating device, server device, avatar creating method and program
CN106815803B (en) Picture processing method and device
CN107623819A Photographing method, mobile terminal, and related media product
CN111158567B (en) Processing method and electronic equipment
CN110363036B Code scanning method and device based on a wired controller, and code scanning system
WO2017092345A1 (en) Display method and device for virtual device image
CN112532911A (en) Image data processing method, device, equipment and storage medium
CN108769538B (en) Automatic focusing method and device, storage medium and terminal
TWI397024B (en) Method for image auto-selection and computer system
CN114531564A (en) Processing method and electronic equipment
CN112449115B (en) Shooting method and device and electronic equipment
CN113012042B (en) Display device, virtual photo generation method, and storage medium
CN112839167A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN109040778B (en) Video cover determining method, user equipment, storage medium and device
CN116681613A (en) Illumination-imitating enhancement method, device, medium and equipment for face key point detection
CN109658360B (en) Image processing method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant