WO2023005659A1 - Image processing method and apparatus, electronic device, computer-readable storage medium, computer program and computer program product - Google Patents

Image processing method and apparatus, electronic device, computer-readable storage medium, computer program and computer program product

Info

Publication number
WO2023005659A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
scene
trajectory
track
Prior art date
Application number
PCT/CN2022/105205
Other languages
English (en)
French (fr)
Inventor
周俊竹
陈小健
李杰
王磁州
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023005659A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G06T 1/00 General purpose image data processing
    • G06T 1/0007 Image acquisition
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Definitions

  • The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, a computer-readable storage medium, a computer program, and a computer program product.
  • By visualizing the scene, the position of the target object in the scene can be reflected more intuitively. Before visualization, the target object must be located in the visualization data of the scene.
  • the traditional method uses the map of the scene as the visualized data of the scene, and determines the position of the target object in the map of the scene, so as to realize the positioning of the target object in the visualized data of the scene.
  • the positioning accuracy of the traditional method is not high.
  • the present disclosure provides an image processing method and device, electronic equipment, a computer-readable storage medium, a computer program and a computer program product.
  • an image processing method, comprising: acquiring a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map missing scene; acquiring a second image of a target object, the second image being captured by a first camera in the first scene; determining a first position of the first camera in the first image according to the position information of the at least one camera in the first image; and determining a position of the target object in the first image according to the first position.
  • said acquiring the second image of the target object includes:
  • A blacklist library is obtained; an image containing an object in the blacklist library is determined from at least one third image as the second image, and the at least one third image is captured by the at least one camera.
  • the method also includes:
  • a second position of the target object in the first image is acquired, the second position being different from the first position; and a first trajectory of the target object is displayed in the first image according to the first position and the second position, the first trajectory including a drivable trajectory.
  • before the first track of the target object is displayed in the first image according to the first position and the second position, the method further includes: acquiring at least one second trajectory in the first image, the at least one second trajectory including the drivable trajectory;
  • the displaying the first track of the target object in the first image according to the first position and the second position includes:
  • in a case where the trajectory points of a target trajectory in the at least one second trajectory include the first position and the second position, determining the target trajectory as the first track.
  • the at least one second trajectory includes a third trajectory
  • the acquiring the at least one second trajectory in the first image includes:
  • in a case where a trajectory setting instruction for the first image is detected, a trajectory setting page is displayed, the trajectory setting page including the first image and the at least one camera in the first scene; the at least one camera in the first scene includes a second camera and a third camera different from the second camera; in a case where an instruction to use the second camera as a start trajectory point is detected, the position of the second camera in the first image is used as the start trajectory point; in a case where an instruction to use the third camera as an end trajectory point is detected, the position of the third camera in the first image is used as the end trajectory point;
  • the trajectory input for the start trajectory point and the end trajectory point is used as the third trajectory.
  • the at least one camera further includes a fourth camera, the fourth camera is different from the second camera, and the fourth camera is different from the third camera;
  • the method further includes:
  • said taking the track input for the start track point and the end track point as the third track includes:
  • the trajectory input for the start trajectory point, the intermediate trajectory point and the end trajectory point is used as the third trajectory; the endpoints of the third trajectory are the start trajectory point and the end trajectory point respectively, and the third trajectory includes the intermediate trajectory point.
  • the method further includes:
  • the start track point and the end track point are displayed in the first image in a first preset display manner.
  • the method also includes:
  • a fourth image of a second scene is acquired, the first scene including the second scene; and the thumbnail of the first image and the thumbnail of the fourth image are displayed in a second preset display manner.
  • the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first area of the display page, and the display page further includes a second area different from the first area;
  • the method also includes:
  • the fourth image is displayed in the second area.
  • the method also includes:
  • a fourth image of a second scene is acquired, the first scene including the second scene; and a thumbnail of the first scene is obtained based on the first image and the fourth image.
  • the acquiring the position information of at least one camera in the first scene in the first image includes:
  • in a case where an instruction to mark a camera position is detected, the at least one camera in the first scene and the first image are displayed on a marking page;
  • in a case where a movement instruction for moving the at least one camera to the first image is detected, the position information of the at least one camera in the first scene in the first image is obtained according to the movement instruction.
  • the acquiring the position information of at least one camera in the first scene in the first image includes:
  • in a case where an instruction to mark a camera position is detected, the at least one camera in the first scene is displayed on the marking page; in a case where a position input instruction for the at least one camera is detected, a position input box is displayed; and the position information of the at least one camera in the first image is obtained according to the position in the position input box.
  • the method further includes:
  • in a case where it is detected that a target camera in the at least one camera is clicked, the target camera is displayed in the first image in the first preset display manner.
  • the annotation page further includes at least one of the following information: whether the at least one camera has been marked in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, and a detailed-information viewing button of the at least one camera.
  • in a case where it is detected that the detailed-information viewing button of the at least one camera is clicked, at least one of the following information is displayed: the name of the first scene, the access state of the at least one camera, and the moving direction of the object captured by the at least one camera.
  • in a case where it is detected that the preview button of the at least one camera is clicked, the image captured by the at least one camera is displayed.
  • the first image is a bird's-eye view of the first scene.
  • an image processing device comprising:
  • An acquisition part configured to acquire a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map missing scene;
  • the acquisition part is configured to acquire a second image of the target object, the second image is acquired by the first camera in the first scene;
  • a determining part configured to determine a first position of the first camera in the first image according to position information of the at least one camera in the first image
  • the determining part is further configured to determine the position of the target object in the first image according to the first position.
  • the obtaining part is configured to obtain a blacklist library; determine an image containing an object in the blacklist library from at least one third image as the second image, The at least one third image is captured by the at least one camera.
  • the acquisition part is further configured to acquire a second position of the target object in the first image, the second position being different from the first position; and to display, according to the first position and the second position, a first trajectory of the target object in the first image, the first trajectory including a drivable trajectory.
  • the acquisition part is further configured to, before the first track of the target object is displayed in the first image according to the first position and the second position, acquire at least one second trajectory in the first image, the at least one second trajectory including the drivable trajectory;
  • the determining part is configured to determine the target trajectory as the first trajectory in a case where the trajectory points of the target trajectory in the at least one second trajectory include the first position and the second position.
  • the at least one second track includes a third track
  • the acquisition part is configured to display a track setting page when a track setting instruction for the first image is detected
  • the track setting page includes the first image and at least one camera in the first scene
  • the at least one camera in the first scene includes a second camera and a third camera different from the second camera
  • in the case of detecting an instruction to use the second camera as a start track point, use the position of the second camera in the first image as the start track point;
  • in the case of detecting an instruction to use the third camera as an end track point, use the position of the third camera in the first image as the end track point;
  • the track input for the start track point and the end track point is used as the third track.
  • the at least one camera further includes a fourth camera, the fourth camera is different from the second camera, and the fourth camera is different from the third camera;
  • the determining part is further configured to, before the track input for the start track point and the end track point is used as the third track, use the position of the fourth camera in the first image as an intermediate track point in the case of detecting an instruction to use the fourth camera as an intermediate track point;
  • the acquisition part is configured to use the trajectory input for the start trajectory point, the intermediate trajectory point and the end trajectory point as the third trajectory; the endpoints of the third trajectory are respectively the start track point and the end track point, and the third track includes the intermediate track point.
  • the image processing device further includes: a display part configured to display the start track point and the end track point in the first image in the first preset display mode after the third trajectory is obtained.
  • the acquiring part is further configured to acquire a fourth image of a second scene, the first scene including the second scene;
  • the display part is further configured to display a thumbnail of the first image and a thumbnail of the fourth image in a second preset display manner.
  • the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first area of the display page, and the display page further includes a second area different from the first area;
  • the display part is further configured to display the fourth image in the second area in the case of detecting that the thumbnail of the fourth image is clicked.
  • the acquisition part is further configured to acquire a fourth image of a second scene, the first scene including the second scene, and to obtain a thumbnail of the first scene based on the first image and the fourth image.
  • the acquiring part is further configured to display at least one camera in the first scene and the first image on the labeling page if an instruction to mark the position of the camera is detected ; In the case of detecting a movement instruction to move the at least one camera to the first image, according to the movement instruction, obtain the position of at least one camera in the first scene in the first image information.
  • the acquiring part is further configured to display at least one camera in the first scene on the labeling page if an instruction to mark the position of the camera is detected;
  • the display part is further configured to display a position input box when a position input instruction for the at least one camera is detected;
  • the obtaining part is further configured to obtain the position information of the at least one camera in the first image according to the position in the position input box.
  • the display part is further configured to, in the case of detecting that a target camera in the at least one camera is clicked, display the target camera in the first image in the first preset display manner.
  • the annotation page further includes at least one of the following information: whether the at least one camera has been marked in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, and a detailed-information viewing button of the at least one camera.
  • the display part is further configured to display at least one of the following information when it is detected that the detailed-information viewing button of the at least one camera is clicked: the name of the first scene, the access state of the at least one camera, and the moving direction of the object photographed by the at least one camera.
  • the display part is further configured to display the image captured by the at least one camera when it is detected that the preview button of the at least one camera is clicked.
  • the first image is a bird's-eye view of the first scene.
  • an electronic device including: a processor and a memory, where the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor executes the computer instructions, the electronic device executes the method according to the foregoing first aspect and any possible implementation manner thereof.
  • another electronic device including: a processor, a sending device, an input device, an output device, and a memory, where the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor executes the computer instructions, the electronic device executes the method according to the above first aspect and any possible implementation manner thereof.
  • a computer-readable storage medium is provided.
  • a computer program is stored in the computer-readable storage medium, and the computer program includes program instructions.
  • when the program instructions are executed by a processor, the processor executes the method according to the above first aspect and any possible implementation manner thereof.
  • a computer program product includes a computer program or instructions; when the computer program or instructions are run on a computer, the computer executes the method according to the above first aspect and any possible implementation manner thereof.
  • a computer program including computer-readable code; when the computer-readable code is run in an electronic device, a processor in the electronic device executes the code to implement the method according to the above first aspect and any possible implementation manner thereof.
  • FIG. 1 is a schematic diagram of an image processing system architecture provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a map provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a map missing scene provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a pixel coordinate system provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of another map missing scene provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of displaying a thumbnail of a first image and a thumbnail of a second image in a second preset display mode provided by an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of a display page provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a track setting page provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of displaying a first image and a fourth image in a second preset display mode provided by an embodiment of the present disclosure
  • FIG. 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • Fig. 12 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • "At least one (item)" means one or more; "multiple" means two or more; and "at least two (items)" means two or more. The character "/" may indicate that the associated objects are in an "or" relationship. "At least one of the following items" refers to any combination of these items, including any combination of a single item or multiple items.
  • For example, at least one item of a, b or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where each of a, b and c can be single or multiple.
  • Reference to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present disclosure.
  • the occurrences of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is understood explicitly and implicitly by those skilled in the art that the embodiments described herein can be combined with other embodiments.
  • By visualizing the scene, the position of the target object in the scene can be reflected more intuitively; before visualization, the target object must be located in the visualization data of the scene. The traditional method uses the map of the scene as the visualized data and determines the position of the target object on that map, but the positioning accuracy of this method is not high.
  • the execution subject of the embodiments of the present disclosure is an image processing apparatus, where the image processing apparatus may be any electronic device capable of executing the technical solutions disclosed in the method embodiments of the present disclosure.
  • the image processing device may be one of the following: a mobile phone, a computer, a tablet computer, and a wearable smart device.
  • FIG. 1 is a schematic structural diagram of an image processing system 11 provided by an embodiment of the present disclosure.
  • the image processing device 112 may be a server.
  • At least one camera 111 and an image processing device 112 may be deployed in a map missing scene.
  • the map missing scene is a building in a supervision area, for example, a closed park.
  • The at least one camera 111 is used to capture at least one of images and videos inside the building.
  • the image processing device 112 processes at least one of the images and videos collected by the at least one camera 111 based on the technical solution provided below, and determines the position of the target object in the building.
  • at least one camera 111 includes a first camera.
  • the image processing device acquires a first image including a scene in a building, and a second image acquired by a first camera, wherein the second image includes a target object.
  • the image processing device 112 can determine the position of the target object in the first image based on the technical solutions provided below, that is, the position of the target object in the building scene.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • the map missing scene includes an area missing detailed information on the map, where the detailed information includes at least one of the following: building information and road information.
  • the first scene includes at least one of the following: supervision places, subway stations.
  • the at least one camera in the first scene is at least one camera whose shooting range is located in the first scene.
  • the at least one camera may be one camera, or may be two or more cameras.
  • the position in the image may be the position in the pixel coordinate system of the image, where the abscissa of the pixel coordinate system represents the number of the column where a pixel is located, and the ordinate represents the number of the row where the pixel is located.
  • the pixel coordinate system XOY is constructed with the upper-left corner of the image as the coordinate origin O, the direction parallel to the rows of the image as the direction of the X axis, and the direction parallel to the columns of the image as the direction of the Y axis.
  • the units of the abscissa and ordinate are pixels.
  • For example, in Fig. 5, the coordinates of pixel A11 are (1, 1), the coordinates of pixel A23 are (3, 2), the coordinates of pixel A42 are (2, 4), and the coordinates of pixel A34 are (4, 3).
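  • As a rough illustration (a sketch, not part of the patent; the helper below is invented), the pixel-coordinate convention described above can be expressed in Python, reproducing the example coordinates from Fig. 5:

```python
# Minimal sketch of the pixel coordinate system described above.
# The origin O is the upper-left corner; x grows along the rows (column
# index), y grows along the columns (row index); units are pixels.
# 1-based indexing is assumed here to match the (1, 1) example for A11.

def pixel_coord(row: int, col: int) -> tuple[int, int]:
    """Map a (row, col) grid location to pixel coordinates (x, y)."""
    return (col, row)

assert pixel_coord(row=1, col=1) == (1, 1)   # pixel A11
assert pixel_coord(row=2, col=3) == (3, 2)   # pixel A23
assert pixel_coord(row=4, col=2) == (2, 4)   # pixel A42
assert pixel_coord(row=3, col=4) == (4, 3)   # pixel A34
```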
  • the position of the at least one camera in the first image may be determined according to the position information of the at least one camera in the first image.
  • the position of the camera in the first image is the position of the camera in the pixel coordinate system of the first image, and the position corresponds to the position of the camera in the first scene.
  • For example, camera a is installed at point A in the first scene. If the pixel corresponding to point A in the first image is pixel B, then the position of pixel B in the first image is the position of camera a in the first image.
  • the image processing apparatus receives the first image input by the user through the input component.
  • the image processing apparatus receives the first image sent by the terminal.
  • the step of acquiring the first image and the step of acquiring the position information of at least one camera in the first image may be performed separately, or may be performed simultaneously.
  • the image processing apparatus may acquire the first image first, and then acquire position information of at least one camera in the first image.
  • the image processing device may first acquire the position information of at least one camera in the first image, and then acquire the first image.
  • the image processing apparatus acquires position information of at least one camera in the first image during the process of acquiring the first image, or acquires the first image during the process of acquiring the position information of at least one camera in the first image.
  • the target object may be any object.
  • the target object includes one of the following: a human body, a human face, and a vehicle.
  • The second image is an image of the target object; that is, the second image includes the target object.
  • the first camera is any one of the above at least one camera.
  • the image processing device receives the second image input by the user through the input component.
  • the image processing apparatus receives the second image sent by the terminal.
  • the second image contains the target object
  • the second image is collected by the first camera, and the position of the target object in the first image can be obtained according to the position of the first camera in the first image.
  • the image processing apparatus uses the first position as the position of the target object in the first image.
  • the image processing device can determine the position where Xiao Ming appeared in the first image.
  • the image processing device may determine a pixel point corresponding to the first position from the first image, and display the pixel point in the first image.
  • the image processing apparatus displays the first camera at the first position.
  • the image processing device displays the second image at the first position.
  • the image processing device determines the position of the target object in the first image according to the position, in the first image, of the first camera that captured the second image of the target object, thereby improving the positioning accuracy in the first scene.
  • After the image processing device determines the position of the target object in the first image, it displays that position in the first image. In this way, the position where the target object appeared in the first scene can be reflected in the first image, realizing a visual display of the target object's position in the first scene.
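  • As a hedged sketch of this positioning idea (all camera names and pixel positions below are invented for illustration), the target's position in the first image can be looked up from the marked position of the camera that captured it:

```python
# Sketch: the target object's position in the first image is taken to be
# the first position, i.e. the marked pixel position of the first camera
# that captured the second image of the target object.

camera_positions = {          # position information in the first image
    "camera_a": (120, 340),   # pixel coordinates (x, y)
    "camera_b": (480, 210),
}

def locate_target(capturing_camera_id: str) -> tuple[int, int]:
    """Return the position of the target object in the first image."""
    first_position = camera_positions[capturing_camera_id]
    return first_position     # the target is displayed at this pixel

print(locate_target("camera_a"))  # -> (120, 340)
```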
  • the image processing device acquires the second image of the target object by performing the following steps:
  • the blacklist library includes the face image of the object being searched for.
  • the blacklist library contains Zhang San's face image, and at this time, the search object is Zhang San.
  • the face image of the search object captured at place B can be stored in the blacklist library.
  • the image processing apparatus receives the blacklist library input by a user through an input component.
  • the image processing apparatus receives the blacklist library sent by the terminal.
  • the third image is captured by at least one camera in the first scene, and if the object in the blacklist library appears in the third image, it means that the object in the blacklist library appears in the first scene. At this time, according to at least one third image containing the object in the blacklist library, the position of the object in the blacklist library in the first scene can be determined.
  • The image processing device determines, by comparing the at least one third image with the face images in the blacklist library, which third images contain a face in the blacklist library, and uses an image containing an object in the blacklist library as the second image.
  • steps 201 to 204 can be combined to determine the positions of the objects in the blacklist library in the first image.
  • the blacklist database includes Zhang San's face image and Li Si's face image.
  • the at least one third image includes a third image a and a third image b.
  • the image processing device determines that the third image a includes Zhang San by comparing the third image a with the face image of Zhang San, and then uses the third image a as the second image. Combining the technical solutions of steps 201 to 204, the position of Zhang San in the first image can be further determined.
  • Through step 1 and step 2, the image processing device determines whether an object in the blacklist library appears in the first scene by determining whether the at least one third image contains an object in the blacklist library.
  • the position of the object in the first image can be further determined by combining steps 201 to 204 .
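  • A minimal sketch of step 1 and step 2 might look as follows; `face_distance` stands in for whatever face-comparison model is actually used, and both it and the threshold are assumptions rather than the patent's method:

```python
# Hedged sketch: select, from the third images, those containing an
# object in the blacklist library.

def face_distance(image, face) -> float:
    """Placeholder for a face-comparison score (lower = more similar)."""
    return 1.0  # a real implementation would compare face embeddings

def select_second_images(third_images, blacklist, threshold=0.6):
    """Return the third images containing a face from the blacklist."""
    return [
        img for img in third_images
        if any(face_distance(img, face) < threshold for face in blacklist)
    ]
```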
  • the image processing device also performs the following steps:
  • the timestamp of the first location is different from the timestamp of the second location.
  • the image processing apparatus receives the second location input by the user through the input component.
  • the image processing apparatus receives the second location sent by the terminal.
  • According to the first position and the second position, a first trajectory of the target object is displayed in the first image, where the first trajectory includes a trajectory on the road.
  • the drivable trajectories include drivable trajectories for people or vehicles.
  • When the target object includes a person, the drivable trajectory includes a trajectory a person can travel; when the target object includes a car, the drivable trajectory includes a trajectory a car can travel.
  • a trajectory passing through a wall is not a trajectory a person can travel.
  • a trajectory passing through a building is not a trajectory a car can travel.
  • a car can travel on the road, so a trajectory on the road is a trajectory the car can travel.
  • the first image includes at least one road in the first scene, that is, the first image may show at least one road in the first scene.
  • Fig. 6 shows an image of the scene where the Ping An Financial Center is located, and the image may show the first road and the second road.
  • the drivable trajectory includes at least one trajectory on the road.
  • the image processing device takes the first position and the second position as two end points respectively, and determines a track passing through the first position and the second position as the first track.
  • the trajectory determined based on the position may include unreasonable trajectories.
  • a trajectory through a wall is unreasonable for a human.
  • the first trajectory in the embodiment of the present disclosure includes a drivable trajectory, thereby improving the accuracy of the trajectory of the target object.
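  • To make the drivable-trajectory constraint concrete, here is a hedged sketch that rejects tracks whose points fall inside an obstacle (wall/building) mask; the mask, the points, and the point-only check are illustrative simplifications, since a full check would also sample along the segments between points:

```python
import numpy as np

obstacle_mask = np.zeros((600, 800), dtype=bool)  # True = wall/building
obstacle_mask[200:220, :] = True                  # an invented wall

def is_drivable(track) -> bool:
    """track: iterable of (x, y) pixel points in the first image."""
    return all(not obstacle_mask[y, x] for x, y in track)

print(is_drivable([(10, 100), (10, 300)]))  # True: no point hits the wall
print(is_drivable([(10, 100), (10, 210)]))  # False: (10, 210) is in the wall
```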
  • Before performing step 4, the image processing device further performs the following steps:
  • the image processing device receives at least one second trajectory input by a user through an input component.
  • the image processing apparatus receives at least one second track sent by the terminal.
  • During step 5, the image processing device executes the following steps:
  • the track point of the target track includes the first position and the second position, that is, the target track passes the first position and the second position.
  • the point corresponding to the first position and the point corresponding to the second position are two endpoints of the target trajectory.
  • For example, the point corresponding to the first position is the starting point of the target trajectory and the point corresponding to the second position is the ending point of the target trajectory; alternatively, the point corresponding to the second position is the starting point of the target trajectory and the point corresponding to the first position is the ending point of the target trajectory.
  • the image processing device may determine a trajectory whose trajectory points include the first location and the second location from the at least one second trajectory as the first trajectory.
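  • A small sketch of this selection rule, with trajectories modeled as lists of pixel points (the data below is invented):

```python
# Sketch: the first trajectory is the target trajectory among the second
# trajectories whose track points include both the first position and the
# second position.

def pick_first_trajectory(second_trajectories, first_pos, second_pos):
    for traj in second_trajectories:
        if first_pos in traj and second_pos in traj:
            return traj   # this target trajectory is the first trajectory
    return None           # no candidate passes both positions

tracks = [[(1, 1), (2, 2)], [(1, 1), (5, 5), (9, 9)]]
print(pick_first_trajectory(tracks, (1, 1), (9, 9)))  # -> the second track
```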
  • the at least one second track includes a third track, that is, the third track is any track in the at least one second track.
  • the image processing device performs the following steps in the process of executing step 4:
  • the track setting page includes the first image and the at least one camera in the first scene, and the at least one camera in the first scene includes a second camera and a third camera different from the second camera.
  • the trajectory setting instruction may be input by the user to the image processing device through the input component, and the trajectory setting instruction may also be sent by the user to the image processing device through the terminal.
  • the track setting instruction is used to instruct the image processing device to enter the track setting program.
  • When the image processing device detects a trajectory setting instruction for the first image, it displays a trajectory setting page and displays the first image and the at least one camera in the first scene on the trajectory setting page.
  • At least one camera in the first scene includes a second camera and a third camera different from the second camera. That is, in this step, the first scene includes two or more cameras, and the second camera and the third camera are any two different cameras in the first scene.
  • the image processing device displays "Please select the starting track point" on the track setting page, and the user then clicks on the second camera to input an instruction to the image processing device to use the second camera as the starting track point .
  • the image processing device displays "Please input the starting track point" on the track setting page, and the user further inputs the name of the second camera to input the second camera as the starting track point to the image processing device. point instruction.
  • the image processing device displays "Please select the end track point" on the track setting page, and the user then clicks on the third camera to input an instruction to the image processing device to use the third camera as the starting track point.
  • the image processing device displays "Please enter the ending track point" on the track setting page, and the user then inputs the name of the third camera to the image processing device to use the third camera as the starting track point instructions.
  • the trajectory determined by the image processing device based only on the start trajectory point and the end trajectory point may not be a drivable trajectory, while the user can determine the drivable trajectory between the start trajectory point and the end trajectory point by viewing the first image displayed on the page. Therefore, the image processing device uses the trajectory input for the start trajectory point and the end trajectory point as the third trajectory.
  • the image processing device displays a track setting page through a touch screen.
  • the user connects the start track point and the end track point by touching the display screen, constructing a track whose endpoints are the start track point and the end track point, thereby inputting to the image processing device the track for the start track point and the end track point.
  • the third trajectory in steps 7 to 10 is only an example, and it should not be understood that only one trajectory of the at least one second trajectory can be obtained through steps 7 to 10.
  • the image processing device can obtain all the second trajectories by repeatedly performing steps 7 to 10.
  • the at least one camera further includes a fourth camera, the fourth camera is different from the second camera, and the fourth camera is different from the third camera.
  • Before performing step 10, the image processing device also performs the following step:
  • the image processing device displays "Please select an intermediate track point" on the track setting page, and the user clicks on the fourth camera to input an instruction to use the fourth camera as the starting track point to the image processing device.
  • the image processing device displays "Please enter the middle track point" on the track setting page, and the user further inputs the name of the fourth camera to the image processing device to use the fourth camera as the starting track point instructions.
  • After performing step 11, the image processing device obtains the third trajectory by performing the following step:
  • the trajectory input for the start trajectory point, the intermediate trajectory point and the end trajectory point is used as the third trajectory; the endpoints of the third trajectory are the start trajectory point and the end trajectory point respectively, and the third trajectory includes the intermediate trajectory point.
  • the image processing device displays a track setting page through a touch screen.
  • the user connects the start track point, the intermediate track point and the end track point by touching the display screen, constructing a track whose endpoints are the start track point and the end track point and which passes through the intermediate track point, thereby inputting to the image processing device the track for the start track point, the intermediate track point and the end track point.
  • the second camera, the third camera and the fourth camera are all examples, and it should not be understood that the first scene only includes three cameras.
  • By performing step 11 and step 12, the image processing device obtains a trajectory whose endpoints are the start trajectory point and the end trajectory point and which passes through the intermediate trajectory point, making the trajectory information of the third trajectory more detailed.
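  • As an illustrative sketch of steps 11 and 12 (all coordinates invented), a drawn track qualifies as the third trajectory when its endpoints are the start and end track points and it passes through the intermediate track point:

```python
# Sketch: validate a user-drawn track against the start, intermediate,
# and end track points (the marked positions of the second, fourth, and
# third cameras respectively).

start_point = (100, 400)   # second camera's position in the first image
middle_point = (250, 320)  # fourth camera's position
end_point = (420, 180)     # third camera's position

def is_valid_third_trajectory(points, start, middle, end) -> bool:
    """Endpoints must be start/end; the middle point must be included."""
    return bool(points) and points[0] == start \
        and points[-1] == end and middle in points

drawn = [start_point, (180, 360), middle_point, (340, 250), end_point]
print(is_valid_third_trajectory(drawn, start_point, middle_point, end_point))  # True
```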
  • the image processing device further performs the following steps:
  • The first preset display manner may be emphasized display; that is, in step 13, the image processing device displays the start track point and the end track point in the first image with emphasis. Emphasized display means that the start track point and the end track point are displayed differently from the non-emphasized display area, where the non-emphasized display area is the area of the first image excluding the start track point and the end track point.
  • the first preset display manner includes one or more of the following: color highlighting, highlighting, and floating display.
  • the first preset display manner includes color highlighting.
  • the image processing device converts the non-emphasized display area into a grayscale image, and retains the color of the starting track point and the color of the ending track point, so as to realize highlighting the starting track point and the ending track point.
  • the first preset display manner includes highlighting.
  • the image processing device highlights the start track point and the end track point, so as to realize highlighting the start track point and the end track point.
  • the first preset display manner includes floating display.
  • the first image includes a head up display (HUD) layer, the image processing device determines the first display area corresponding to the starting track point from the HUD layer, and determines the second display area corresponding to the ending track point from the HUD layer area.
  • the first display area and the second display area are used as floating display areas, and the start track point is displayed in the first display area, and the end track point is displayed in the second display area.
  • the image processing device can enable the user to more intuitively determine the third trajectory from the first image by displaying the start trajectory point and the end trajectory point in the first image in the first preset display manner.
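  • A hedged sketch of the color-highlighting variant described above, using plain NumPy (the window radius and the image are illustrative assumptions):

```python
# Sketch: convert the non-emphasized area to grayscale while keeping the
# original colors in small windows around the start and end track points.

import numpy as np

def emphasize(image: np.ndarray, points, radius: int = 10) -> np.ndarray:
    """image: H x W x 3 uint8 array; points: iterable of (x, y) pixels."""
    gray = image.mean(axis=2, keepdims=True).astype(image.dtype)
    out = np.repeat(gray, 3, axis=2)     # grayscale everywhere
    h, w = image.shape[:2]
    for x, y in points:                  # restore color around each point
        x0, x1 = max(x - radius, 0), min(x + radius, w)
        y0, y1 = max(y - radius, 0), min(y + radius, h)
        out[y0:y1, x0:x1] = image[y0:y1, x0:x1]
    return out

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
shown = emphasize(img, [(100, 200), (500, 300)])  # start/end track points
```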
  • the image processing device also performs the following steps:
  • the first scene and the second scene have a containment relationship; that is, the second scene is part of the first scene.
  • the first scene is a campus
  • the second scene is a teaching building in the campus.
  • the first scene is the teaching building
  • the second scene is the third floor in the teaching building.
  • the first scene is the third floor of the teaching building
  • the second scene is classroom 308 on the third floor.
  • the second scenario includes a map missing scenario.
  • the image processing apparatus receives the fourth image input by the user through the input component.
  • the image processing apparatus receives the fourth image sent by the terminal.
  • The second preset display manner may be hierarchical display; that is, in step 15, the image processing apparatus displays the thumbnail of the first image and the thumbnail of the fourth image in a hierarchical manner, as shown, for example, in FIG. 7.
  • In this way, the image processing device can simulate a three-dimensional model of the first scene, obtaining a 3D display effect and better showing the hierarchical relationship between the first scene and the second scene.
  • the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first area of the display page, and the display page further includes a second area different from the first area.
  • the image processing device also performs the following steps:
  • the image processing device can simulate the 3D model of the first scene, and the 3D model is displayed in the first area. And by clicking any thumbnail of the three-dimensional model in the first area, the user can make the image processing device display the scene image corresponding to the thumbnail in the second area. Therefore, the image processing device displays the fourth image in the second area when detecting that the thumbnail of the fourth image is clicked.
  • Similarly, in a case where the thumbnail of the first image is clicked, the image processing device switches the image displayed in the second area from the fourth image to the first image.
  • the image shown in FIG. 8 is the display page in step 15.
  • Fig. 9 shows the track setting page.
  • the left side of the track display page is the scene information display area, which is used to display scene information.
  • the scene information display areas include: factory, clinic, surveillance area 3, surveillance area 4, surveillance area 1, surveillance area 2, and hospital.
  • the middle of the track display page is the image display area, which is used to display images; for the image display area shown in FIG. 9, refer to FIG. 8.
  • the right side of the track display page is the camera information display area, which is used to display the camera information.
  • the image processing device also performs the following steps:
  • For the implementation of this step, refer to step 14.
  • the image processing apparatus obtains the thumbnail of the first image based on the first image, and obtains the thumbnail of the fourth image based on the fourth image.
  • the thumbnail of the first image and the thumbnail of the fourth image are displayed hierarchically to obtain the thumbnail of the first scene.
  • the image processing device acquires position information of at least one camera in the first scene in the first image by performing the following steps:
  • the instruction to mark the position of the camera may be that the user controls the cursor with a mouse and clicks a mark button on the displayed page.
  • the command to mark the camera position may also be input by the user through an input component; for example, the user inputs the character "start" through the keyboard, thereby inputting the marking command to the image processing device.
  • the user inputs a labeling instruction to the image processing device by inputting voice data "start labeling the camera position" to the image processing device.
  • the image processing apparatus may display the at least one camera in the first scene on the annotation page by displaying an identification of the at least one camera on the annotation page, where the identification may be one or more of the following: an icon, a numeric identifier, or a text identifier.
  • the annotation page of the image processing device includes the following areas: a camera display area and an image display area, where the camera display area is used to display the at least one camera in the first scene, and the image display area is used to display the image of the first scene (i.e. the first image).
  • the first scene is a surveillance area
  • there are three surveillance cameras installed in the surveillance area, namely surveillance camera a, surveillance camera b, and surveillance camera c.
  • the logo of surveillance camera a, the logo of surveillance camera b and the logo of surveillance camera c can be displayed in the camera display area.
  • the image of the supervision area can be displayed in the image display area.
  • the user inputs a moving instruction to the image processing apparatus by dragging the camera in the first scene to the first image.
  • At least one camera includes camera a and camera b, and the user drags camera a to point A in the first image, and drags camera b to point B in the first image.
  • the moving instruction includes moving camera a to point A in the first image, and moving camera b to point B in the first image.
  • the user inputs a movement instruction to the image processing apparatus through a keyboard, where the keyboard includes a physical keyboard and a virtual keyboard.
  • At least one camera includes camera a and camera b
  • the user inputs, through a physical keyboard, an instruction to the image processing device to move camera a to point A in the first image and to move camera b to point B in the first image.
  • the moving instruction includes moving camera a to point A in the first image, and moving camera b to point B in the first image.
  • Based on the movement instruction, the position information of the at least one camera in the first scene in the first image can be determined; therefore, the position information of the at least one camera in the first scene in the first image can be obtained according to the movement instruction.
  • At least one camera includes camera a
  • the moving instruction includes moving camera a to point A in the first image.
  • the image processing device determines the position of camera a in the first image as point A according to the movement instruction, and then determines the position information of camera a in the first image according to the position of point A in the first image.
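  • A minimal sketch of this drag-based marking (the instruction format and the coordinates are assumptions for illustration):

```python
# Sketch: a drag ("move") instruction maps each camera to a point in the
# first image, and the marked point becomes that camera's position
# information in the first image.

move_instruction = {
    "camera_a": (300, 120),  # drag camera a to point A in the first image
    "camera_b": (510, 260),  # drag camera b to point B
}

camera_positions: dict[str, tuple[int, int]] = {}

def apply_move_instruction(instruction: dict) -> None:
    """Record each camera's marked pixel position in the first image."""
    for camera_id, point in instruction.items():
        camera_positions[camera_id] = point

apply_move_instruction(move_instruction)
print(camera_positions["camera_a"])  # -> (300, 120)
```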
  • the image processing device acquires position information of at least one camera in the first scene in the first image by performing the following steps:
  • the instruction to mark the position of the camera may be that the user controls the cursor with a mouse and clicks a mark button on the displayed page.
  • the command to mark the camera position may also be input by the user through an input component; for example, the user inputs the character "start" through the keyboard, thereby inputting the marking command to the image processing device.
  • the user inputs a labeling instruction to the image processing device by inputting voice data "start labeling the camera position" to the image processing device.
  • the image processing apparatus may display the at least one camera in the first scene on the annotation page by displaying an identification of the at least one camera on the annotation page, where the identification may be one or more of the following: an icon, a numeric identifier, or a text identifier.
  • the position input instruction may be input by the user to the image processing apparatus through the input component, and the position input instruction may be sent by the user to the image processing apparatus through the terminal.
  • the input box is used to input a position, and the position in the input box represents a position in the first image. For example, if the user inputs (3, 4) in the input box, the input represents the point whose coordinates in the pixel coordinate system of the first image are (3, 4).
  • At least one camera includes camera a and camera b.
  • the user inputs a position input command for the camera a to the image processing device, so that the image processing device displays a position input box. If the position input by the user in the position input box is position A, then the image processing device uses position A as the position of camera a in the first image.
  • the position information of the at least one camera in the first image is obtained.
  • the image processing device obtains the position information of the camera in the first image according to the position in the input box corresponding to that camera.
  • At least one camera includes camera a and camera b.
  • the user inputs a position input command for the camera a to the image processing device, so that the image processing device displays a position input box. If the position input by the user in the position input box is position A, then the image processing device uses position A as the position of camera a in the first image. After inputting the position A, the user further inputs a position input command for the camera b to the image processing device, so that the image processing device displays a position input box. If the position input by the user in the position input box is position B, then the image processing device uses position B as the position of camera b in the first image.
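  • A hedged sketch of this input-box-based marking; the exact input format is an assumption based on the "(3, 4)" example above:

```python
# Sketch: parse the text typed into the position input box into pixel
# coordinates in the first image.

import re

def parse_position(text: str) -> tuple[int, int]:
    """Parse input like '(3, 4)' or '3, 4' into (x, y) pixel coordinates."""
    match = re.fullmatch(r"\s*\(?\s*(\d+)\s*,\s*(\d+)\s*\)?\s*", text)
    if match is None:
        raise ValueError(f"not a valid position: {text!r}")
    return (int(match.group(1)), int(match.group(2)))

camera_positions = {"camera_a": parse_position("(3, 4)")}
print(camera_positions)  # -> {'camera_a': (3, 4)}
```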
  • After obtaining the position information of the at least one camera in the first image, the image processing device further performs the following step:
  • the user clicks on the target camera in the annotation page, and the image processing device highlights the target camera in the first image.
  • the annotation page further includes at least one of the following information: whether the at least one camera has been marked in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, and a detailed-information viewing button of the at least one camera.
  • Whether marked in the first image indicates whether the camera has been marked in the first image, that is, whether the position information of the camera in the first image was determined through steps 19 to 20 or through steps 21 to 23.
  • The camera types include, for example, dome cameras and bullet cameras. Clicking the preview button of a camera allows previewing the images captured by that camera; clicking the detailed-information viewing button of a camera allows viewing the detailed information of that camera.
  • When it is detected that the detailed-information viewing button of the at least one camera is clicked, at least one of the following information is displayed: the name of the first scene, the access state of the at least one camera, and the moving direction of the object captured by the at least one camera.
  • the access state of the camera indicates whether there is a communication connection between the image processing device and the camera.
  • At least one camera includes camera a. If there is no communication connection between camera a and the image processing device, the access state of camera a is abnormal; if there is a communication connection between camera a and the image processing device, the access state of camera a is normal.
  • the moving direction of the object captured by the camera includes: entering or exiting.
  • At least one camera includes camera a.
  • Camera a is installed at the gate of building A. If the moving direction of the objects captured by camera a includes "in", the objects captured by camera a all enter building A; if the moving direction of the objects captured by camera a includes "out", the objects captured by camera a all leave building A.
  • When it is detected that the preview button of the at least one camera is clicked, an image captured by the at least one camera is displayed.
  • At least one camera includes camera a and camera b.
  • When the image processing device detects that the preview button of camera a is clicked, it displays the captured image of camera a.
  • When the image processing device detects that the preview button of camera b is clicked, it displays the captured image of camera b.
  • the first image is a bird's-eye view of the first scene.
  • Because the first image is a bird's-eye view of the first scene, the first image contains richer detailed information about the first scene.
  • the image processing device may display the position of the target object in the first image and visually display the position where the target object appeared in the first scene, so as to improve the display effect.
  • FIG. 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • the image processing device 2 includes: an acquisition part 21 and a determination part 22 .
  • Optionally, the image processing device 2 further includes a display part 23. Specifically:
  • the acquiring part 21 is configured to acquire a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map missing scene;
  • the acquisition part 21 is also configured to acquire a second image of the target object, the second image is acquired by the first camera in the first scene;
  • the determining part 22 is configured to determine a first position of the first camera in the first image according to position information of the at least one camera in the first image;
  • the determining part 22 is further configured to determine the position of the target object in the first image according to the first position.
  • the acquiring part 21 is further configured to acquire a blacklist library; determine an image containing an object in the blacklist library from at least one third image as the second image , the at least one third image is captured by the at least one camera.
  • the acquiring part 21 is further configured to acquire a second position of the target object in the first image, where the second position is different from the first position; according to In the first position and the second position, a first trajectory of the target object is displayed in the first image, and the first trajectory includes a drivable trajectory.
  • the acquisition part 21 is further configured to, before the first trajectory of the target object is displayed in the first image according to the first position and the second position, acquire at least one second trajectory in the first image, the at least one second trajectory including the drivable trajectory; in a case where the trajectory points of a target trajectory in the at least one second trajectory include the first position and the second position, the target trajectory is determined as the first trajectory.
  • the at least one second track includes a third track
  • the acquisition part 21 is further configured to display the track setting page when a track setting instruction for the first image is detected;
  • the track setting page includes the first image and the at least one camera in the first scene;
  • the at least one camera in the first scene includes a second camera and a third camera different from the second camera; in the case of detecting an instruction to use the second camera as a start track point, the position of the second camera in the first image is used as the start track point;
  • in the case of detecting an instruction to use the third camera as an end track point, the position of the third camera in the first image is used as the end track point; the track input for the start track point and the end track point is used as the third track.
  • the at least one camera further includes a fourth camera, the fourth camera is different from the second camera, and the fourth camera is different from the third camera;
  • the determining part 22 is further configured to, before the track input for the start track point and the end track point is used as the third track, use the position of the fourth camera in the first image as an intermediate track point in a case where an instruction to use the fourth camera as an intermediate track point is detected;
  • the acquisition part 21 is also configured to use the trajectory input for the start trajectory point, the intermediate trajectory point and the end trajectory point as the third trajectory; the endpoints of the third trajectory are respectively the start trajectory point and the end trajectory point, and the third trajectory includes the intermediate trajectory point.
  • the image processing device 2 further includes: a display part 23 configured to display the start track point and the end track point in the first image in a first preset display manner after the third trajectory is obtained.
  • the acquiring part 21 is further configured to acquire a fourth image of a second scene, the first scene including the second scene;
  • the display part 23 is further configured to display the thumbnail of the first image and the thumbnail of the fourth image in a second preset display manner.
  • the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first area of the display page, and the display page further includes a second area different from the first area;
  • the display part 23 is further configured to display the fourth image in the second area in the case of detecting that the thumbnail of the fourth image is clicked.
  • the acquisition part 21 is further configured to acquire a fourth image of a second scene, the first scene including the second scene, and to obtain a thumbnail of the first scene based on the first image and the fourth image.
  • the acquisition part 21 is further configured to display the at least one camera in the first scene and the first image on an annotation page in a case where an instruction to annotate camera positions is detected; and, in a case where a movement instruction to move the at least one camera into the first image is detected, obtain the position information of the at least one camera in the first scene in the first image according to the movement instruction.
  • the acquisition part 21 is further configured to display the at least one camera in the first scene on the annotation page in a case where an instruction to annotate camera positions is detected;
  • the display part 23 is further configured to display a position input box when a position input instruction for the at least one camera is detected;
  • the acquisition part 21 is further configured to obtain the position information of the at least one camera in the first image according to the position in the position input box.
  • the display part 23 is further configured to, in a case where a click on a target camera among the at least one camera is detected, display the target camera in the first image in the first preset display manner.
  • the annotation page further includes at least one of the following pieces of information: whether the at least one camera has been annotated in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, and a details-viewing button of the at least one camera.
  • the display part 23 is further configured to display at least one of the following pieces of information in a case where a click on the details-viewing button of the at least one camera is detected: the name of the first scene, the access status of the at least one camera, and the moving direction of the objects captured by the at least one camera.
  • the display part 23 is further configured to display the image captured by the at least one camera when it is detected that the preview button of the at least one camera is clicked.
  • the first image is a bird's-eye view of the first scene.
  • the image processing device determines the position of the target object in the first image according to the position, in the first image, of the first camera that captured the target object's second image, thereby improving positioning accuracy in the first scene.
  • the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments, and the implementation may refer to the descriptions of the above method embodiments.
  • a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course it may also be a unit, a module or a non-modular one.
  • Fig. 12 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device 3 includes a processor 31 , a memory 32 , an input device 33 and an output device 34 .
  • the processor 31 , the memory 32 , the input device 33 and the output device 34 are coupled through connectors, and the connectors include various interfaces, transmission lines or buses, etc., which are not limited in this embodiment of the present disclosure.
  • coupling refers to interconnection in a specific manner, including direct connection or indirect connection through other devices, for example, connection through various interfaces, transmission lines, and buses.
  • the processor 31 may be one or more graphics processing units (graphics processing units, GPUs).
  • the GPU may be a single-core GPU or a multi-core GPU.
  • the processor 31 may be a processor group formed by multiple GPUs, and the multiple processors are coupled to each other through one or more buses.
  • the processor may also be other types of processors, etc., which are not limited in this embodiment of the present disclosure.
  • the memory 32 can be used to store computer program instructions and various kinds of computer program code, including the program code for implementing the solutions of the present disclosure.
  • the memory includes but is not limited to random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM); the memory is used for related instructions and data.
  • the input device 33 is used to input at least one of data and signals, and the output device 34 is used to output at least one of data and signals.
  • the input device 33 and the output device 34 can be independent devices, or an integrated device.
  • the memory 32 can be used not only to store related instructions but also to store related data; for example, the memory 32 can be used to store the first image acquired through the input device 33, or to store the first position obtained by the processor 31. The embodiment of the present disclosure does not limit the data stored in the memory.
  • Fig. 12 only shows a simplified design of an image processing device.
  • the image processing device may also include other necessary components, including but not limited to any number of input/output devices, processors, and memories; all image processing devices that can implement the embodiments of the present disclosure fall within the protection scope of the present disclosure.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a processor, the The processor executes the image processing method described above.
  • An embodiment of the present disclosure also provides a computer program product, where the computer program product includes a computer program or an instruction, and when the computer program or instruction is run on a computer, causes the computer to execute the above image processing method.
  • An embodiment of the present disclosure also provides a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above image processing method.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the parts is only a logical function division, and there may be other division manners in actual implementation; for example, multiple parts or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or parts may be in electrical, mechanical or other forms.
  • a part described as a separate component may or may not be physically separated, and a component displayed as a part may or may not be a physical part; that is, it may be located in one place or distributed across multiple network parts. Some or all of the parts may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
  • each functional part in each embodiment of the present disclosure may be integrated into one processing part, or each part may physically exist separately, or two or more parts may be integrated into one part.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, all or part of the processes or functions according to the embodiments of the present disclosure will be generated.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted via a computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium (computer-readable storage medium) may be a tangible device capable of retaining and storing instructions used by the instruction execution device, and may be a volatile storage medium or a nonvolatile storage medium.
  • a computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Those of ordinary skill in the art can understand that all or part of the processes in the above method embodiments may be completed by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium includes various media capable of storing program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical discs.
  • the embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, a computer-readable storage medium, a computer program, and a computer program product.
  • the method includes: acquiring a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map-missing scene; acquiring a second image of a target object, the second image being captured by a first camera in the first scene; determining a first position of the first camera in the first image according to the position information of the at least one camera in the first image; and determining a position of the target object in the first image according to the first position. Positioning can be achieved even when the scene's detailed information is missing from the map, and the positioning accuracy is high.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, a computer-readable storage medium, a computer program, and a computer program product. The method includes: acquiring a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map-missing scene; acquiring a second image of a target object, the second image being captured by a first camera in the first scene; determining a first position of the first camera in the first image according to the position information of the at least one camera in the first image; and determining a position of the target object in the first image according to the first position.

Description

Image processing method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
Cross-Reference to Related Applications
The present disclosure is based on, and claims priority to, Chinese patent application No. 202110874129.5, filed on July 30, 2021 and entitled "Image processing method and apparatus, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, a computer-readable storage medium, a computer program, and a computer program product.
Background
Visualizing the position of a target object in a scene shows that position more intuitively. Before visualization, however, the target object needs to be located within the visualization data of the scene.
The traditional method uses a map of the scene as the scene's visualization data and determines the position of the target object on that map. When the map lacks detailed information about the scene, the positioning accuracy of the traditional method is low.
Summary
The present disclosure provides an image processing method and apparatus, an electronic device, a computer-readable storage medium, a computer program, and a computer program product.
In a first aspect, an image processing method is provided, the method including:
acquiring a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map-missing scene;
acquiring a second image of a target object, the second image being captured by a first camera in the first scene;
determining a first position of the first camera in the first image according to the position information of the at least one camera in the first image; and
determining a position of the target object in the first image according to the first position.
In some embodiments of the present disclosure, acquiring the second image of the target object includes:
acquiring a blacklist library; and
determining, from at least one third image, an image containing an object in the blacklist library as the second image, the at least one third image being captured by the at least one camera.
In some embodiments of the present disclosure, the method further includes:
acquiring a second position of the target object in the first image, the second position being different from the first position; and
displaying, according to the first position and the second position, a first trajectory of the target object in the first image, the first trajectory including a drivable trajectory.
In some embodiments of the present disclosure, before the displaying, according to the first position and the second position, the first trajectory of the target object in the first image, the method further includes:
acquiring at least one second trajectory in the first image, each of the at least one second trajectory including the drivable trajectory;
and the displaying, according to the first position and the second position, the first trajectory of the target object in the first image includes:
in a case where the trajectory points of a target trajectory among the at least one second trajectory include the first position and the second position, determining the target trajectory as the first trajectory.
In some embodiments of the present disclosure, the at least one second trajectory includes a third trajectory, and acquiring the at least one second trajectory in the first image includes:
in a case where a trajectory setting instruction for the first image is detected, displaying a trajectory setting page, the trajectory setting page including the first image and the at least one camera in the first scene, the at least one camera in the first scene including a second camera and a third camera different from the second camera;
in a case where an instruction to use the second camera as a start trajectory point is detected, taking the position of the second camera in the first image as the start trajectory point;
in a case where an instruction to use the third camera as an end trajectory point is detected, taking the position of the third camera in the first image as the end trajectory point; and
taking the trajectory input for the start trajectory point and the end trajectory point as the third trajectory.
In some embodiments of the present disclosure, the at least one camera further includes a fourth camera, the fourth camera being different from the second camera and different from the third camera;
before the taking the trajectory input for the start trajectory point and the end trajectory point as the third trajectory, the method further includes:
in a case where an instruction to use the fourth camera as an intermediate trajectory point is detected, taking the position of the fourth camera in the first image as the intermediate trajectory point;
and the taking the trajectory input for the start trajectory point and the end trajectory point as the third trajectory includes:
taking the trajectory input for the start trajectory point, the intermediate trajectory point, and the end trajectory point as the third trajectory, the endpoints of the third trajectory being the start trajectory point and the end trajectory point respectively, the third trajectory including the intermediate trajectory point.
In some embodiments of the present disclosure, after the third trajectory is obtained, the method further includes:
displaying the start trajectory point and the end trajectory point in the first image in a first preset display manner.
In some embodiments of the present disclosure, the method further includes:
acquiring a fourth image of a second scene, the first scene including the second scene; and
displaying a thumbnail of the first image and a thumbnail of the fourth image in a second preset display manner.
In some embodiments of the present disclosure, the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first region of a display page, the display page further including a second region different from the first region;
the method further includes:
in a case where a click on the thumbnail of the fourth image is detected, displaying the fourth image in the second region.
In some embodiments of the present disclosure, the method further includes:
acquiring a fourth image of a second scene, the first scene including the second scene; and
obtaining a thumbnail of the first scene based on the first image and the fourth image.
In some embodiments of the present disclosure, acquiring the position information of the at least one camera in the first scene in the first image includes:
in a case where an instruction to annotate camera positions is detected, displaying the at least one camera in the first scene and the first image on an annotation page; and
in a case where a movement instruction to move the at least one camera into the first image is detected, obtaining the position information of the at least one camera in the first scene in the first image according to the movement instruction.
In some embodiments of the present disclosure, acquiring the position information of the at least one camera in the first scene in the first image includes:
in a case where an instruction to annotate camera positions is detected, displaying the at least one camera in the first scene on an annotation page;
in a case where a position input instruction for the at least one camera is detected, displaying a position input box; and
obtaining the position information of the at least one camera in the first image according to the position in the position input box.
In some embodiments of the present disclosure, after the position information of the at least one camera in the first image is obtained, the method further includes:
in a case where a click on a target camera among the at least one camera is detected, displaying the target camera in the first image in the first preset display manner.
In some embodiments of the present disclosure, the annotation page further includes at least one of the following pieces of information: whether the at least one camera has been annotated in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, and a details-viewing button of the at least one camera.
In some embodiments of the present disclosure, in a case where a click on the details-viewing button of the at least one camera is detected, at least one of the following pieces of information is displayed: the name of the first scene, the access status of the at least one camera, and the moving direction of the objects captured by the at least one camera.
In some embodiments of the present disclosure, in a case where a click on the preview button of the at least one camera is detected, the frames captured by the at least one camera are displayed.
In some embodiments of the present disclosure, the first image is a bird's-eye view of the first scene.
In a second aspect, an image processing apparatus is provided, the apparatus including:
an acquisition part configured to acquire a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map-missing scene;
the acquisition part being configured to acquire a second image of a target object, the second image being captured by a first camera in the first scene;
a determining part configured to determine a first position of the first camera in the first image according to the position information of the at least one camera in the first image;
the determining part being further configured to determine a position of the target object in the first image according to the first position.
In some embodiments of the present disclosure, the acquisition part is configured to acquire a blacklist library, and to determine, from at least one third image, an image containing an object in the blacklist library as the second image, the at least one third image being captured by the at least one camera.
In some embodiments, the acquisition part is further configured to acquire a second position of the target object in the first image, the second position being different from the first position, and to display, according to the first position and the second position, a first trajectory of the target object in the first image, the first trajectory including a drivable trajectory.
In some embodiments, the acquisition part is further configured to, before the first trajectory of the target object is displayed in the first image according to the first position and the second position, acquire at least one second trajectory in the first image, each of the at least one second trajectory including the drivable trajectory; and the determining part is configured to, in a case where the trajectory points of a target trajectory among the at least one second trajectory include the first position and the second position, determine the target trajectory as the first trajectory.
In some embodiments, the at least one second trajectory includes a third trajectory, and the acquisition part is configured to: display a trajectory setting page in a case where a trajectory setting instruction for the first image is detected, the trajectory setting page including the first image and the at least one camera in the first scene, the at least one camera in the first scene including a second camera and a third camera different from the second camera; in a case where an instruction to use the second camera as a start trajectory point is detected, take the position of the second camera in the first image as the start trajectory point; in a case where an instruction to use the third camera as an end trajectory point is detected, take the position of the third camera in the first image as the end trajectory point; and take the trajectory input for the start trajectory point and the end trajectory point as the third trajectory.
In some embodiments, the at least one camera further includes a fourth camera, the fourth camera being different from the second camera and different from the third camera;
the determining part is further configured to, before the trajectory input for the start trajectory point and the end trajectory point is taken as the third trajectory, take the position of the fourth camera in the first image as an intermediate trajectory point in a case where an instruction to use the fourth camera as the intermediate trajectory point is detected;
and the acquisition part is configured to take the trajectory input for the start trajectory point, the intermediate trajectory point, and the end trajectory point as the third trajectory, the endpoints of the third trajectory being the start trajectory point and the end trajectory point respectively, the third trajectory including the intermediate trajectory point.
In some embodiments, the image processing apparatus further includes a display part configured to, after the third trajectory is obtained, display the start trajectory point and the end trajectory point in the first image in a first preset display manner.
In some embodiments, the acquisition part is further configured to acquire a fourth image of a second scene, the first scene including the second scene;
and the display part is further configured to display a thumbnail of the first image and a thumbnail of the fourth image in a second preset display manner.
In some embodiments, the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first region of a display page, the display page further including a second region different from the first region;
the display part is further configured to display the fourth image in the second region in a case where a click on the thumbnail of the fourth image is detected.
In some embodiments, the acquisition part is further configured to acquire a fourth image of a second scene, the first scene including the second scene, and to obtain a thumbnail of the first scene based on the first image and the fourth image.
In some embodiments, the acquisition part is further configured to display the at least one camera in the first scene and the first image on an annotation page in a case where an instruction to annotate camera positions is detected, and, in a case where a movement instruction to move the at least one camera into the first image is detected, obtain the position information of the at least one camera in the first scene in the first image according to the movement instruction.
In some embodiments, the acquisition part is further configured to display the at least one camera in the first scene on the annotation page in a case where an instruction to annotate camera positions is detected;
the display part is further configured to display a position input box in a case where a position input instruction for the at least one camera is detected;
and the acquisition part is further configured to obtain the position information of the at least one camera in the first image according to the position in the position input box.
In some embodiments, the display part is further configured to display a target camera among the at least one camera in the first image in the first preset display manner in a case where a click on the target camera is detected.
In some embodiments, the annotation page further includes at least one of the following pieces of information: whether the at least one camera has been annotated in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, and a details-viewing button of the at least one camera.
In some embodiments, the display part is further configured to display at least one of the following pieces of information in a case where a click on the details-viewing button of the at least one camera is detected: the name of the first scene, the access status of the at least one camera, and the moving direction of the objects captured by the at least one camera.
In some embodiments, the display part is further configured to display the frames captured by the at least one camera in a case where a click on the preview button of the at least one camera is detected.
In some embodiments, the first image is a bird's-eye view of the first scene.
In a third aspect, an electronic device is provided, including a processor and a memory, the memory being configured to store computer program code, the computer program code including computer instructions; when the processor executes the computer instructions, the electronic device executes the method of the first aspect and any one of its possible implementations.
In a fourth aspect, another electronic device is provided, including a processor, a sending apparatus, an input apparatus, an output apparatus, and a memory, the memory being configured to store computer program code, the computer program code including computer instructions; when the processor executes the computer instructions, the electronic device executes the method of the first aspect and any one of its possible implementations.
In a fifth aspect, a computer-readable storage medium is provided, in which a computer program is stored; the computer program includes program instructions which, when executed by a processor, cause the processor to execute the method of the first aspect and any one of its possible implementations.
In a sixth aspect, a computer program product is provided, including a computer program or instructions which, when run on a computer, cause the computer to execute the method of the first aspect and any one of its possible implementations.
In a seventh aspect, a computer program is provided, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes a method implementing the first aspect and any one of its possible implementations.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the embodiments of the present disclosure.
Brief Description of the Drawings
Fig. 1 is a schematic architecture diagram of an image processing system provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a map provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a map-missing scene provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of a pixel coordinate system provided by an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of another map-missing scene provided by an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of displaying the thumbnail of the first image and the thumbnail of the fourth image in a second preset display manner, provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a display page provided by an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of a trajectory setting page provided by an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of displaying the first image and the fourth image in a second preset display manner, provided by an embodiment of the present disclosure;
Fig. 11 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
Fig. 12 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To help those skilled in the art better understand the solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
The terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish different objects rather than to describe a particular order. Moreover, the terms "include" and "have", and any variants thereof, are intended to cover a non-exclusive inclusion: a process, method, system, product, or device that includes a series of steps or parts is not limited to the listed steps or parts, but optionally further includes steps or parts that are not listed, or optionally further includes other steps or parts inherent to the process, method, product, or device.
It should be understood that in the embodiments of the present disclosure, "at least one (item)" means one or more, "multiple" means two or more, and "at least two (items)" means two, three, or more. The character "/" may indicate an "or" relationship between the preceding and following objects, and "at least one of the following items" or similar expressions refer to any combination of these items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where each of a, b, and c may be singular or plural. The character "/" may also denote division in mathematical operations, for example, a/b = a divided by b, and 6/3 = 2.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present disclosure. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Visualizing the position of a target object in a scene shows that position more intuitively; before visualization, the target object needs to be located within the visualization data of the scene.
The traditional method uses a map of the scene as the scene's visualization data and determines the position of the target object on that map. When the map lacks detailed information about the scene — the detailed information including at least one of building information and road information — the positioning accuracy of the traditional method is low. On this basis, the embodiments of the present disclosure provide a technical solution to improve the accuracy of positioning in a scene whose detailed information is missing from the map.
The execution subject of the embodiments of the present disclosure is an image processing apparatus, which may be any electronic device capable of executing the technical solutions disclosed in the method embodiments of the present disclosure, for example one of the following: a mobile phone, a computer, a tablet computer, a wearable smart device.
It should be understood that the method embodiments of the present disclosure may also be implemented by a processor executing computer program code. The embodiments of the present disclosure are described below with reference to the accompanying drawings.
Referring to Fig. 1, Fig. 1 is a schematic architecture diagram of an image processing system 11 provided by an embodiment of the present disclosure. In Fig. 1, there is a communication connection between an image processing apparatus 112 and at least one camera 111. Exemplarily, the image processing apparatus 112 may be a server. The at least one camera 111 and the image processing apparatus 112 may be deployed in a map-missing scene.
Exemplarily, the map-missing scene is a building within a supervised area, for example a closed campus.
The at least one camera 111 is used to capture at least one of images and videos inside the building. The image processing apparatus 112 processes at least one of the images and videos captured by the at least one camera 111 based on the technical solutions provided below, and determines the position of a target object inside the building. For example, the at least one camera 111 includes a first camera. The image processing apparatus acquires a first image containing the scene inside the building and a second image captured by the first camera, the second image containing the target object. Based on the technical solutions provided below, the image processing apparatus 112 can determine the position of the target object in the first image, i.e., the position of the target object in the scene inside the building.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
201. Acquire a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene including a map-missing scene.
In the embodiments of the present disclosure, a map-missing scene includes a region whose detailed information is missing from the map, the detailed information including at least one of building information and road information.
For example, the map shown in Fig. 3 is missing both the road information around the Ping An Finance Center and the building information of the Ping An Finance Center; for that information, see the Ping An Finance Center and the roads around it shown in Fig. 4. Exemplarily, the first scene includes at least one of the following: a supervised site, a subway station.
In the embodiments of the present disclosure, the at least one camera in the first scene is at least one camera whose shooting range lies within the first scene; it may be one camera, or two or more cameras.
In the embodiments of the present disclosure, a position in an image may be a position in the image's pixel coordinate system, where the abscissa of the pixel coordinate system indicates the column in which a pixel is located and the ordinate indicates the row in which the pixel is located.
For example, in the image shown in Fig. 5, a pixel coordinate system XOY is constructed with the top-left corner of the image as the origin O, the direction parallel to the rows of the image as the X axis, and the direction parallel to the columns of the image as the Y axis. The units of both the abscissa and the ordinate are pixels: in Fig. 5, pixel A11 has coordinates (1, 1), pixel A23 has coordinates (3, 2), pixel A42 has coordinates (2, 4), and pixel A34 has coordinates (4, 3).
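To make this convention concrete, here is a minimal Python sketch (illustrative only, not part of the disclosure; the helper name pixel_at is an assumption) of reading a pixel under the coordinate convention of Fig. 5:

```python
import numpy as np

def pixel_at(image: np.ndarray, x: int, y: int):
    """Return the pixel at coordinate (x, y) in the convention used here.

    x counts columns from the left, y counts rows from the top, and the
    origin O is the top-left corner of the image. NumPy stores images as
    [row, column], so the 1-based coordinate (x, y) of the A11/A23
    examples above maps to image[y - 1, x - 1].
    """
    return image[y - 1, x - 1]

# A 4x4 single-channel image; pixel "A23" in the text is at (x=3, y=2).
img = np.arange(16).reshape(4, 4)
print(pixel_at(img, 3, 2))  # reads row 2, column 3
```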
In the embodiments of the present disclosure, the position of the at least one camera in the first image can be determined according to the position information of the at least one camera in the first image. A camera's position in the first image is its position in the first image's pixel coordinate system, and this position corresponds to the camera's position in the first scene.
For example, camera a is installed at point A in the first scene. If pixel B in the first image corresponds to point A, then the position of pixel B in the first image is the position of camera a in the first image.
In one implementation of acquiring the first image, the image processing apparatus receives the first image input by a user through an input component.
In another implementation of acquiring the first image, the image processing apparatus receives the first image sent by a terminal.
In one implementation of acquiring the position information of the at least one camera in the first image, the image processing apparatus receives the position information input by a user through an input component.
In another implementation of acquiring the position information of the at least one camera in the first image, the image processing apparatus receives the position information sent by a terminal.
It should be understood that, in the embodiments of the present disclosure, the step of acquiring the first image and the step of acquiring the position information of the at least one camera in the first image may be performed separately or simultaneously. For example, the image processing apparatus may first acquire the first image and then the position information, or first acquire the position information and then the first image; as a further example, it may acquire the position information in the course of acquiring the first image, or acquire the first image in the course of acquiring the position information.
202. Acquire a second image of the target object, the second image being captured by the first camera in the first scene.
In the embodiments of the present disclosure, the target object may be any object; in one possible implementation, the target object includes one of the following: a human body, a human face, a vehicle.
The "second image of the target object" means that the second image contains the target object. The first camera is any one of the at least one camera.
In one implementation of acquiring the second image, the image processing apparatus receives the second image input by a user through an input component.
In another implementation of acquiring the second image, the image processing apparatus receives the second image sent by a terminal.
In yet another implementation of acquiring the second image, there is a communication connection between the image processing apparatus and the first camera, and the image processing apparatus acquires the second image from the first camera through that connection.
204、依据上述第一位置,确定上述目标对象在上述第一图像中的位置。
由于第二图像包含目标对象,第二图像由第一摄像头采集得到,依据第一摄像头在第一图像中的位置,可得到目标对象在第一图像中的位置。示例性的,图像处理装置将第一位置作为第一图像中的位置。
例如,假设第一场景为监管区域,目标对象为监管区域的被监管员小明。图像处理装置通过执行本实施例提供的技术方案,可确定小明在第一图像中出现过的位置。
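As a minimal sketch of how steps 203 and 204 can reduce to a lookup (illustrative only; the dictionary camera_positions and the function locate_target are assumed names, not the disclosure's implementation):

```python
# Annotated positions of cameras in the first image (pixel coordinates),
# e.g. produced by the annotation flows described later in this section.
camera_positions = {
    "camera_a": (120, 340),
    "camera_b": (560, 210),
}

def locate_target(capturing_camera_id: str) -> tuple[int, int]:
    """Steps 203-204: the target's position in the first image is taken
    to be the first position, i.e. the annotated position of the camera
    that captured the target's second image."""
    return camera_positions[capturing_camera_id]

print(locate_target("camera_a"))  # -> (120, 340)
```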
In one possible implementation, the image processing apparatus may determine, from the first image, the pixel corresponding to the first position and display that pixel in the first image.
In another possible implementation, the image processing apparatus displays the first camera at the first position.
In yet another possible implementation, the image processing apparatus displays the second image at the first position.
In the embodiments of the present disclosure, the image processing apparatus determines the position of the target object in the first image according to the position, in the first image, of the first camera that captured the target object's second image, thereby improving positioning accuracy in the first scene.
In some embodiments, after determining the target object's position in the first image, the image processing apparatus displays that position in the first image. In this way, the positions at which the target object has appeared in the first scene are reflected in the first image, realizing a visual display of the target object's position within the first scene.
As an optional implementation, the image processing apparatus acquires the second image of the target object by executing the following steps:
1. Acquire a blacklist library.
In the embodiments of the present disclosure, the blacklist library contains face images of sought objects. For example, if the blacklist library contains a face image of Zhang San, the sought object is Zhang San. As another example, if relevant personnel want to search for sought objects at location B, the face images of those objects can be stored in the blacklist library.
In one implementation of acquiring the blacklist library, the image processing apparatus receives the blacklist library input by a user through an input component.
In another implementation of acquiring the blacklist library, the image processing apparatus receives the blacklist library sent by a terminal.
2. Determine, from at least one third image, an image containing an object in the blacklist library as the second image, the at least one third image being captured by the at least one camera.
The third images are captured by the at least one camera in the first scene; if an object in the blacklist library appears in a third image, that object has appeared in the first scene. The object's position in the first scene can then be determined from the third images that contain it.
In one possible implementation, the image processing apparatus performs face comparison between the at least one third image and the face images in the blacklist library, and determines a third image containing an object in the blacklist library as the second image. In this way, combined with steps 201 to 204, the position of the blacklisted object in the first image can be determined.
For example, suppose the blacklist library includes a face image of Zhang San and a face image of Li Si, and the third images include third image a and third image b. By comparing third image a with Zhang San's face image, the image processing apparatus determines that third image a contains Zhang San and takes third image a as the second image; combined with the technical solutions of steps 201 to 204, Zhang San's position in the first image can be further determined.
In steps 1 and 2, the image processing apparatus determines whether an object in the blacklist library appears in the first scene by determining whether the at least one third image contains that object; when it does, steps 201 to 204 further determine the object's position in the first image.
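The face-comparison flow of steps 1 and 2 might be sketched as follows (illustrative only; is_same_face stands in for an unspecified face-comparison model and is an assumption, not an API of the disclosure):

```python
def find_second_images(third_images, blacklist_faces, is_same_face):
    """Steps 1-2: keep the third images that contain any blacklisted face.

    `is_same_face(image, face)` is a stand-in for a real face-comparison
    model, which the disclosure does not specify; it is assumed to return
    True when `image` contains the person shown in `face`.
    """
    second_images = []
    for image in third_images:
        if any(is_same_face(image, face) for face in blacklist_faces):
            second_images.append(image)
    return second_images
```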
As an optional implementation, the image processing apparatus further executes the following steps:
3. Acquire a second position of the target object in the first image, the second position being different from the first position.
Exemplarily, the timestamp of the first position differs from the timestamp of the second position. In one implementation of acquiring the second position, the image processing apparatus receives the second position input by a user through an input component.
In another implementation of acquiring the second position, the image processing apparatus receives the second position sent by a terminal.
4. Display, according to the first position and the second position, a first trajectory of the target object in the first image, the first trajectory including a drivable trajectory.
In the embodiments of the present disclosure, a drivable trajectory is a trajectory along which a person or a vehicle can travel. In some embodiments, when the target object includes a person, the drivable trajectory includes a trajectory a person can travel; when the target object includes a vehicle, it includes a trajectory a vehicle can travel.
For example, a person cannot pass through a wall, so a through-the-wall trajectory is not one a person can travel; a person can pass through a narrow aisle, so a trajectory along a narrow aisle is one a person can travel; a person can pass through a building's gate, so a trajectory through the gate is one a person can travel.
As another example, a vehicle cannot pass through a building, so a through-the-building trajectory is not one a vehicle can travel; a vehicle can drive on a road, so a trajectory on a road is one a vehicle can travel.
Exemplarily, the first image includes at least one road in the first scene, i.e., the first image can show at least one road in the first scene. For example, Fig. 6 shows an image of the scene where the Ping An Finance Center is located; the image shows a first road and a second road. The drivable trajectory includes a trajectory on at least one of these roads.
In one possible implementation, the image processing apparatus takes the first position and the second position as two endpoints and determines a trajectory passing through the first position and the second position as the first trajectory.
A trajectory determined from positions alone may be an unreasonable one — for a person, for example, a through-the-wall trajectory is unreasonable. Because the first trajectory in the embodiments of the present disclosure includes a drivable trajectory, the accuracy of the target object's trajectory can be improved.
As an optional implementation, before executing step 4, the image processing apparatus further executes the following step:
5. Acquire at least one second trajectory in the first image, each of the at least one second trajectory including the drivable trajectory.
In one implementation of acquiring the at least one second trajectory, the image processing apparatus receives the at least one second trajectory input by a user through an input component.
In another implementation of acquiring the at least one second trajectory, the image processing apparatus receives the at least one second trajectory sent by a terminal.
After executing step 5, the image processing apparatus executes the following step in the course of executing step 4:
6. In a case where the trajectory points of a target trajectory among the at least one second trajectory include the first position and the second position, determine the target trajectory as the first trajectory.
In the embodiments of the present disclosure, the trajectory points of the target trajectory include the first position and the second position, i.e., the target trajectory passes through the first position and the second position.
Exemplarily, the point corresponding to the first position and the point corresponding to the second position are the two endpoints of the target trajectory.
For example, the point corresponding to the first position is the start of the target trajectory and the point corresponding to the second position is its end.
As another example, the point corresponding to the second position is the start of the target trajectory and the point corresponding to the first position is its end.
In this implementation, since the at least one second trajectory is acquired by the image processing apparatus before the target object's first trajectory in the first image is determined, and each second trajectory includes the drivable trajectory, the image processing apparatus can determine, from the at least one second trajectory, a trajectory whose trajectory points include the first position and the second position as the first trajectory.
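Step 6 can be sketched as a simple membership test over the stored drivable trajectories (illustrative only; representing a trajectory as a list of pixel-coordinate points is an assumption):

```python
def select_first_trajectory(second_trajectories, first_pos, second_pos):
    """Step 6: among the pre-stored drivable trajectories, return one
    whose trajectory points include both the first and second positions."""
    for trajectory in second_trajectories:  # each is a list of (x, y) points
        if first_pos in trajectory and second_pos in trajectory:
            return trajectory
    return None  # no stored drivable trajectory passes through both points

routes = [
    [(120, 340), (300, 300), (560, 210)],  # e.g. corridor -> gate
    [(120, 340), (150, 500)],
]
print(select_first_trajectory(routes, (120, 340), (560, 210)))
```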
As an optional implementation, the at least one second trajectory includes a third trajectory, i.e., the third trajectory is any one of the at least one second trajectory. The image processing apparatus executes the following steps in the course of executing step 5:
7. In a case where a trajectory setting instruction for the first image is detected, display a trajectory setting page, the trajectory setting page including the first image and the at least one camera in the first scene, the at least one camera in the first scene including a second camera and a third camera different from the second camera.
In the embodiments of the present disclosure, the trajectory setting instruction may be input by a user to the image processing apparatus through an input component, or sent by a user to the image processing apparatus through a terminal. The trajectory setting instruction instructs the image processing apparatus to enter the trajectory setting procedure.
When the trajectory setting instruction for the first image is detected, the image processing apparatus displays the trajectory setting page and shows on it the first image and the at least one camera in the first scene.
In this step, the at least one camera in the first scene includes a second camera and a third camera different from the second camera; that is, the first scene contains two or more cameras, and the second and third cameras are any two different cameras in the first scene.
8. In a case where an instruction to use the second camera as a start trajectory point is detected, take the position of the second camera in the first image as the start trajectory point.
In one possible implementation, the image processing apparatus displays "Please select a start trajectory point" on the trajectory setting page, and the user inputs the instruction to use the second camera as the start trajectory point by clicking the second camera.
In another possible implementation, the image processing apparatus displays "Please enter a start trajectory point" on the trajectory setting page, and the user inputs the instruction by entering the second camera's name.
9. In a case where an instruction to use the third camera as an end trajectory point is detected, take the position of the third camera in the first image as the end trajectory point.
In one possible implementation, the image processing apparatus displays "Please select an end trajectory point" on the trajectory setting page, and the user inputs the instruction to use the third camera as the end trajectory point by clicking the third camera.
In another possible implementation, the image processing apparatus displays "Please enter an end trajectory point" on the trajectory setting page, and the user inputs the instruction by entering the third camera's name.
10. Receive the trajectory input for the start trajectory point and the end trajectory point as the third trajectory.
Since the first image lacks road-network information, a trajectory determined by the image processing apparatus from the start and end trajectory points alone may not be a drivable trajectory, whereas the user can determine a drivable trajectory between the two points from the first image shown on the display page. The image processing apparatus therefore takes the trajectory input for the start and end trajectory points as the third trajectory.
In one possible implementation, the image processing apparatus displays the trajectory setting page through a touch screen. The user connects the start and end trajectory points via the touch screen, constructing a trajectory whose endpoints are the start and end trajectory points, thereby inputting the trajectory for those points to the image processing apparatus.
It should be understood that steps 7 to 10 describe the third trajectory only as an example and should not be read as the only way to obtain one of the at least one second trajectory; in practical applications, the image processing apparatus can obtain all second trajectories by executing steps 7 to 10.
As an optional implementation, the at least one camera further includes a fourth camera, the fourth camera being different from the second camera and different from the third camera.
Before executing step 10, the image processing apparatus further executes the following step:
11. In a case where an instruction to use the fourth camera as an intermediate trajectory point is detected, take the position of the fourth camera in the first image as the intermediate trajectory point.
In one possible implementation, the image processing apparatus displays "Please select an intermediate trajectory point" on the trajectory setting page, and the user inputs the instruction to use the fourth camera as the intermediate trajectory point by clicking the fourth camera.
In another possible implementation, the image processing apparatus displays "Please enter an intermediate trajectory point" on the trajectory setting page, and the user inputs the instruction by entering the fourth camera's name.
After executing step 11, the image processing apparatus obtains the third trajectory by executing the following step:
12. Take the trajectory input for the start trajectory point, the intermediate trajectory point, and the end trajectory point as the third trajectory, the endpoints of the third trajectory being the start trajectory point and the end trajectory point respectively, the third trajectory including the intermediate trajectory point.
In one possible implementation, the image processing apparatus displays the trajectory setting page through a touch screen. The user connects the start, intermediate, and end trajectory points via the touch screen, constructing a trajectory whose endpoints are the start and end trajectory points and which passes through the intermediate trajectory point, thereby inputting the trajectory for those points to the image processing apparatus; a sketch of the implied validation follows below.
It should be understood that the second, third, and fourth cameras are examples only and should not be read as meaning that the first scene contains only three cameras.
By executing steps 11 and 12, the image processing apparatus obtains a trajectory whose endpoints are the start and end trajectory points, which makes the trajectory information of the third trajectory more detailed.
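A minimal sketch of the validation implied by step 12 — the drawn trajectory must start and end at the chosen camera positions and pass through every intermediate trajectory point (illustrative only; all names are assumptions):

```python
def build_third_trajectory(start_point, end_point, intermediate_points,
                           drawn_points):
    """Steps 11-12: the third trajectory is the user-drawn polyline whose
    endpoints are the start and end trajectory points and which passes
    through every intermediate trajectory point (camera positions)."""
    trajectory = [start_point, *drawn_points, end_point]
    missing = [p for p in intermediate_points if p not in trajectory]
    if missing:
        raise ValueError(f"trajectory must pass through {missing}")
    return trajectory

# Cameras at these image positions; the middle points were drawn by the
# user on the touch screen between the start and end cameras.
print(build_third_trajectory((10, 10), (90, 40), [(50, 25)],
                             [(30, 20), (50, 25), (70, 30)]))
```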
As an optional implementation, after obtaining the third trajectory, the image processing apparatus further executes the following step:
13. Display the start trajectory point and the end trajectory point in the first image in a first preset display manner.
In the embodiments of the present disclosure, the first preset display manner may be emphasized display; that is, in step 13, the image processing apparatus displays the start and end trajectory points in the first image in an emphasized way. Emphasized display of the start and end trajectory points means that they are displayed differently from the non-emphasized region, where the non-emphasized region includes the region of the first image other than the start and end trajectory points.
Exemplarily, the first preset display manner includes one or more of the following: color highlighting, brightness highlighting, floating display.
In one possible implementation, the first preset display manner includes color highlighting: the image processing apparatus converts the non-emphasized region to a grayscale image while retaining the colors of the start and end trajectory points, thereby making those points stand out.
In another possible implementation, the first preset display manner includes brightness highlighting: the image processing apparatus highlights the start and end trajectory points to make them stand out.
In yet another possible implementation, the first preset display manner includes floating display: the first image includes a head-up display (HUD) layer, and the image processing apparatus determines, from the HUD layer, a first display region corresponding to the start trajectory point and a second display region corresponding to the end trajectory point, takes both as floating display regions, and displays the start trajectory point in the first display region and the end trajectory point in the second display region.
By displaying the start and end trajectory points in the first image in the first preset display manner, the image processing apparatus lets the user identify the third trajectory from the first image more intuitively.
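The color-highlighting variant of the first preset display manner might be sketched as follows with NumPy (illustrative only; the circular neighborhood, its radius, and all names are assumptions):

```python
import numpy as np

def highlight_points(image: np.ndarray, points, radius: int = 5):
    """First preset display manner (color highlighting): render the whole
    first image in grayscale while keeping the original color in a small
    neighborhood around each highlighted trajectory point."""
    gray = image.mean(axis=2, keepdims=True).astype(image.dtype)
    out = np.repeat(gray, 3, axis=2)  # grayscale, still 3-channel
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    for (x, y) in points:  # (x, y) in the pixel coordinates defined above
        mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        out[mask] = image[mask]  # restore color at the start/end points
    return out
```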
As an optional implementation, the image processing apparatus further executes the following steps:
14. Acquire a fourth image of a second scene, the first scene including the second scene.
In the embodiments of the present disclosure, the first scene and the second scene have a subordinate relationship.
For example, the first scene is a campus and the second scene is a teaching building on the campus.
As another example, the first scene is a teaching building and the second scene is the third floor of that building.
As a further example, the first scene is the third floor of a teaching building and the second scene is classroom 308 on that floor. Exemplarily, the second scene includes a map-missing scene.
In one implementation of acquiring the fourth image, the image processing apparatus receives the fourth image input by a user through an input component.
In another implementation of acquiring the fourth image, the image processing apparatus receives the fourth image sent by a terminal.
15. Display the thumbnail of the first image and the thumbnail of the fourth image in a second preset display manner.
In the embodiments of the present disclosure, the second preset display manner may be a hierarchical (layered) display; that is, in step 15, the image processing apparatus displays the thumbnail of the first image and the thumbnail of the fourth image hierarchically, as in the example of Fig. 7.
By executing step 15, the image processing apparatus can simulate a three-dimensional model of the first scene, achieving a three-dimensional display effect and better showing the hierarchical relationship between the first scene and the second scene.
As an optional implementation, the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first region of a display page, the display page further including a second region different from the first region. The image processing apparatus further executes the following step:
16. In a case where a click on the thumbnail of the fourth image is detected, display the fourth image in the second region.
By executing step 15, the image processing apparatus can simulate a three-dimensional model of the first scene, displayed in the first region. By clicking any thumbnail in that three-dimensional model, the user causes the image processing apparatus to display the corresponding scene image in the second region; thus, when a click on the fourth image's thumbnail is detected, the fourth image is displayed in the second region.
Exemplarily, when the image displayed in the second region is the fourth image and a click on the first image's thumbnail in the first region is detected, the image processing apparatus switches the image displayed in the second region from the fourth image to the first image.
In one possible implementation, the image shown in Fig. 8 is the display page of step 15: the user clicks the thumbnail of the first image in the first region, and the image processing apparatus displays the first image in the second region.
Exemplarily, Fig. 9 shows a trajectory setting page. The left side of the page is a scene information display region, used to display scene information; as shown in Fig. 9, it includes: factory, medical room, supervision zone 3, supervision zone 4, supervision zone 1, supervision zone 2, hospital. The middle of the page is an image display region, used to display images (for the image display region of Fig. 9, see Fig. 8). The right side of the page is a camera information display region, used to display camera information.
As an optional implementation, the image processing apparatus further executes the following steps:
17. Acquire a fourth image of a second scene, the first scene including the second scene. (For implementations of this step, see step 14.)
18. Obtain a thumbnail of the first scene based on the first image and the fourth image.
In one possible implementation, the image processing apparatus obtains the thumbnail of the first image from the first image and the thumbnail of the fourth image from the fourth image, and obtains the thumbnail of the first scene by displaying the two thumbnails in the second preset display manner, simulating a three-dimensional model of the first scene.
For example, as shown in Fig. 10, the thumbnail of the first image and the thumbnail of the fourth image are displayed hierarchically to obtain the thumbnail of the first scene.
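One possible rendering of steps 17 and 18 using the Pillow imaging library (a sketch only, under the assumption that pasting offset thumbnails onto a canvas approximates the layered display of Fig. 10; the sizes and offsets are arbitrary):

```python
from PIL import Image

def scene_thumbnail(first_image_path, fourth_image_path,
                    size=(320, 180), offset=(24, 24)):
    """Steps 17-18: stack the thumbnail of the fourth image (sub-scene)
    over the thumbnail of the first image with a slight offset, imitating
    the hierarchical display of Fig. 10."""
    first = Image.open(first_image_path).copy()
    fourth = Image.open(fourth_image_path).copy()
    first.thumbnail(size)                        # in-place, keeps aspect
    fourth.thumbnail((size[0] // 2, size[1] // 2))
    canvas = Image.new("RGB", (size[0] + offset[0], size[1] + offset[1]),
                       "white")
    canvas.paste(first, (0, 0))
    canvas.paste(fourth, (first.width - fourth.width + offset[0], offset[1]))
    return canvas
```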
As an optional implementation, the image processing apparatus acquires the position information of the at least one camera in the first scene in the first image by executing the following steps:
19. In a case where an instruction to annotate camera positions is detected, display the at least one camera in the first scene and the first image on an annotation page.
In this step, the instruction to annotate camera positions may be the user clicking an annotation button on the display page with a mouse-controlled cursor. It may also be an annotation instruction input by the user through an input component — for example, the user types the characters "start" on a keyboard, or speaks the voice input "start annotating camera positions".
In one possible implementation, displaying the at least one camera in the first scene on the annotation page may mean displaying an identifier of each camera, where the identifier may be one or more of the following: an icon, a numeric identifier, a text identifier.
Exemplarily, the annotation page of the image processing apparatus includes the following regions: a camera display region, used to display the at least one camera in the first scene, and an image display region, used to display the image of the first scene (i.e., the first image).
For example, if the first scene is a supervised area with three surveillance cameras installed — surveillance cameras a, b, and c — the identifiers of cameras a, b, and c can be displayed in the camera display region, and the image of the supervised area in the image display region.
20. In a case where a movement instruction to move the at least one camera into the first image is detected, obtain the position information of the at least one camera in the first scene in the first image according to the movement instruction.
In one possible implementation, the user inputs the movement instruction by dragging a camera in the first scene into the first image. For example, if the at least one camera includes camera a and camera b, the user drags camera a to point A in the first image and camera b to point B; the movement instruction then includes moving camera a to point A in the first image and moving camera b to point B in the first image.
In another possible implementation, the user inputs the movement instruction through a keyboard (physical or virtual). For example, the user instructs the apparatus through a physical keyboard to move camera a to point A in the first image and camera b to point B; the movement instruction then includes moving camera a to point A and camera b to point B.
Once a camera in the first scene has been moved into the first image, the position information of that camera in the first image can be determined; the position information of the at least one camera in the first image is therefore obtained according to the movement instruction, as sketched below.
For example, suppose the at least one camera includes camera a and the movement instruction includes moving camera a to point A in the first image. According to the movement instruction, the image processing apparatus determines camera a's position in the first image to be point A and, from point A's position in the first image, determines camera a's position information in the first image.
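Step 20 might be sketched as a drop handler (illustrative only; the surrounding UI-toolkit plumbing is omitted and all names are assumptions):

```python
annotated_positions: dict[str, tuple[int, int]] = {}

def on_camera_dropped(camera_id: str, drop_x: int, drop_y: int):
    """Step 20: when a camera icon is dropped onto the first image, the
    drop point (in the first image's pixel coordinates) becomes that
    camera's position information in the first image."""
    annotated_positions[camera_id] = (drop_x, drop_y)

on_camera_dropped("camera_a", 120, 340)  # user dragged camera a to point A
print(annotated_positions)
```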
As an optional implementation, the image processing apparatus acquires the position information of the at least one camera in the first scene in the first image by executing the following steps:
21. In a case where an instruction to annotate camera positions is detected, display the at least one camera in the first scene on an annotation page.
In this step, as in step 19, the instruction to annotate camera positions may be the user clicking an annotation button on the display page with a mouse-controlled cursor, or an annotation instruction input through an input component — for example, typing the characters "start" on a keyboard, or speaking the voice input "start annotating camera positions".
In one possible implementation, displaying the at least one camera on the annotation page may mean displaying an identifier of each camera, where the identifier may be one or more of the following: an icon, a numeric identifier, a text identifier.
22. In a case where a position input instruction for the at least one camera is detected, display a position input box.
In the embodiments of the present disclosure, the position input instruction may be input by the user through an input component, or sent by the user through a terminal.
In the embodiments of the present disclosure, the input box is used to enter a position, and the position in the input box represents a position in the first image. For example, if the user enters (3, 4) in the input box, that position represents the coordinates (3, 4) in the first image's pixel coordinate system.
In one possible implementation, the at least one camera includes camera a and camera b. The user inputs a position input instruction for camera a, causing the image processing apparatus to display the position input box; if the position entered in the box is position A, the apparatus takes position A as camera a's position in the first image.
23. Obtain the position information of the at least one camera in the first image according to the position in the position input box.
In one possible implementation, the image processing apparatus obtains a camera's position information in the first image from the position in the position input box corresponding to that camera; a sketch of the parsing follows below.
For example, the at least one camera includes camera a and camera b. The user inputs a position input instruction for camera a, causing the apparatus to display the position input box; if the position entered is position A, the apparatus takes position A as camera a's position in the first image. After entering position A, the user inputs a position input instruction for camera b, causing the apparatus to display the position input box again; if the position entered is position B, the apparatus takes position B as camera b's position in the first image.
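Step 23's reading of the input box might be sketched as follows (illustrative only; the accepted "(x,y)" format follows the example above, and parse_position is an assumed name):

```python
import re

def parse_position(text: str) -> tuple[int, int]:
    """Step 23: read a position such as "(3,4)" from the input box and
    interpret it as (x, y) in the first image's pixel coordinate system."""
    match = re.fullmatch(r"\s*\(?\s*(\d+)\s*,\s*(\d+)\s*\)?\s*", text)
    if match is None:
        raise ValueError(f"not a valid position: {text!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_position("(3,4)"))  # -> (3, 4)
```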
As an optional implementation, after obtaining the position information of the at least one camera in the first image, the image processing apparatus further executes the following step:
24. In a case where a click on a target camera among the at least one camera is detected, display the target camera in the first image in the first preset display manner.
For the first preset display manner in this step, see step 13.
In one possible implementation, the user clicks the target camera on the annotation page, and the image processing apparatus highlights the target camera in the first image.
As an optional implementation, the annotation page further includes at least one of the following pieces of information: whether the at least one camera has been annotated in the first image, the name of the at least one camera, the type of the at least one camera, a preview button of the at least one camera, a details-viewing button of the at least one camera.
In the embodiments of the present disclosure, "whether annotated in the first image" indicates whether a camera's position information in the first image has already been determined — through steps 19 to 20, or through steps 21 to 23. Camera types include dome cameras, bullet cameras, and half-dome cameras. Clicking a camera's preview button previews the frames it captures; clicking a camera's details-viewing button shows the camera's detailed information.
As an optional implementation, in a case where a click on a camera's details-viewing button is detected, at least one of the following pieces of information is displayed: the name of the first scene, the access status of the at least one camera, the moving direction of the objects captured by the at least one camera.
In the embodiments of the present disclosure, a camera's access status indicates whether there is a communication connection between the image processing apparatus and the camera.
For example, the at least one camera includes camera a and camera b. If there is no communication connection between camera a and the image processing apparatus, camera a's access status is abnormal; if there is a communication connection between camera b and the image processing apparatus, camera b's access status is normal.
In the embodiments of the present disclosure, the moving direction of the objects captured by a camera is either in or out.
For example, suppose the at least one camera includes camera a, installed at the gate of building A. If the moving direction of the objects captured by camera a is "in", all objects it captures are entering building A; if the moving direction is "out", all objects it captures are leaving building A.
As an optional implementation, in a case where a click on the preview button of the at least one camera is detected, the frames captured by the at least one camera are displayed.
For example, the at least one camera includes camera a and camera b. When a click on camera a's preview button is detected, the image processing apparatus displays the frames captured by camera a; when a click on camera b's preview button is detected, it displays the frames captured by camera b.
As an optional implementation, the first image is a bird's-eye view of the first scene.
When the first image is a bird's-eye view of the first scene, it contains more detailed information about the first scene. Displaying the target object's position in the first image then visualizes the positions at which the target object has appeared in the first scene with a better display effect.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
The methods of the embodiments of the present disclosure are described in detail above; the image processing apparatus and electronic device of the embodiments of the present disclosure are described above with reference to Fig. 11 and Fig. 12.

Claims (22)

  1. An image processing method, the method comprising:
    acquiring a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map-missing scene;
    acquiring a second image of a target object, the second image being captured by a first camera in the first scene;
    determining a first position of the first camera in the first image according to the position information of the at least one camera in the first image; and
    determining a position of the target object in the first image according to the first position.
  2. The method according to claim 1, wherein acquiring the second image of the target object comprises:
    acquiring a blacklist library; and
    determining, from at least one third image, an image containing an object in the blacklist library as the second image, the at least one third image being captured by the at least one camera.
  3. The method according to claim 1 or 2, wherein the method further comprises:
    acquiring a second position of the target object in the first image, the second position being different from the first position; and
    displaying, according to the first position and the second position, a first trajectory of the target object in the first image, the first trajectory comprising a drivable trajectory.
  4. The method according to claim 3, wherein before the displaying, according to the first position and the second position, the first trajectory of the target object in the first image, the method further comprises:
    acquiring at least one second trajectory in the first image, each of the at least one second trajectory comprising the drivable trajectory;
    wherein the displaying, according to the first position and the second position, the first trajectory of the target object in the first image comprises:
    in a case where trajectory points of a target trajectory among the at least one second trajectory comprise the first position and the second position, determining the target trajectory as the first trajectory.
  5. The method according to claim 4, wherein the at least one second trajectory comprises a third trajectory, and acquiring the at least one second trajectory in the first image comprises:
    in a case where a trajectory setting instruction for the first image is detected, displaying a trajectory setting page, the trajectory setting page comprising the first image and the at least one camera in the first scene, the at least one camera in the first scene comprising a second camera and a third camera different from the second camera;
    in a case where an instruction to use the second camera as a start trajectory point is detected, taking the position of the second camera in the first image as the start trajectory point;
    in a case where an instruction to use the third camera as an end trajectory point is detected, taking the position of the third camera in the first image as the end trajectory point; and
    taking the trajectory input for the start trajectory point and the end trajectory point as the third trajectory.
  6. The method according to claim 5, wherein the at least one camera further comprises a fourth camera, the fourth camera being different from the second camera and different from the third camera;
    before the taking the trajectory input for the start trajectory point and the end trajectory point as the third trajectory, the method further comprises:
    in a case where an instruction to use the fourth camera as an intermediate trajectory point is detected, taking the position of the fourth camera in the first image as the intermediate trajectory point;
    wherein the taking the trajectory input for the start trajectory point and the end trajectory point as the third trajectory comprises:
    taking the trajectory input for the start trajectory point, the intermediate trajectory point, and the end trajectory point as the third trajectory, the endpoints of the third trajectory being the start trajectory point and the end trajectory point respectively, the third trajectory comprising the intermediate trajectory point.
  7. The method according to claim 5 or 6, wherein after the third trajectory is obtained, the method further comprises:
    displaying the start trajectory point and the end trajectory point in the first image in a first preset display manner.
  8. The method according to any one of claims 1 to 7, wherein the method further comprises:
    acquiring a fourth image of a second scene, the first scene comprising the second scene; and
    displaying a thumbnail of the first image and a thumbnail of the fourth image in a second preset display manner.
  9. The method according to claim 8, wherein the thumbnail of the first image and the thumbnail of the fourth image are displayed in a first region of a display page, the display page further comprising a second region different from the first region;
    the method further comprises:
    in a case where a click on the thumbnail of the fourth image is detected, displaying the fourth image in the second region.
  10. The method according to any one of claims 1 to 7, wherein the method further comprises:
    acquiring a fourth image of a second scene, the first scene comprising the second scene; and
    obtaining a thumbnail of the first scene based on the first image and the fourth image.
  11. The method according to any one of claims 1 to 10, wherein acquiring the position information of the at least one camera in the first scene in the first image comprises:
    in a case where an instruction to annotate camera positions is detected, displaying the at least one camera in the first scene and the first image on an annotation page; and
    in a case where a movement instruction to move the at least one camera into the first image is detected, obtaining the position information of the at least one camera in the first scene in the first image according to the movement instruction.
  12. The method according to any one of claims 1 to 10, wherein acquiring the position information of the at least one camera in the first scene in the first image comprises:
    in a case where an instruction to annotate camera positions is detected, displaying the at least one camera in the first scene on an annotation page;
    in a case where a position input instruction for the at least one camera is detected, displaying a position input box; and
    obtaining the position information of the at least one camera in the first image according to the position in the position input box.
  13. The method according to claim 11 or 12, wherein after the position information of the at least one camera in the first image is obtained, the method further comprises:
    in a case where a click on a target camera among the at least one camera is detected, displaying the target camera in the first image in the first preset display manner.
  14. The method according to any one of claims 11 to 13, wherein the annotation page further comprises at least one of the following pieces of information: whether the at least one camera has been annotated in the first image, a name of the at least one camera, a type of the at least one camera, a preview button of the at least one camera, and a details-viewing button of the at least one camera.
  15. The method according to claim 14, wherein in a case where a click on the details-viewing button of the at least one camera is detected, at least one of the following pieces of information is displayed: a name of the first scene, an access status of the at least one camera, and a moving direction of objects captured by the at least one camera.
  16. The method according to claim 14 or 15, wherein in a case where a click on the preview button of the at least one camera is detected, frames captured by the at least one camera are displayed.
  17. The method according to any one of claims 1 to 16, wherein the first image is a bird's-eye view of the first scene.
  18. An image processing apparatus, the apparatus comprising:
    an acquisition part configured to acquire a first image of a first scene and position information of at least one camera in the first scene in the first image, the first scene being a map-missing scene;
    the acquisition part being configured to acquire a second image of a target object, the second image being captured by a first camera in the first scene;
    a determining part configured to determine a first position of the first camera in the first image according to the position information of the at least one camera in the first image;
    the determining part being configured to determine a position of the target object in the first image according to the first position.
  19. An electronic device, comprising a processor and a memory, the memory being configured to store computer program code, the computer program code comprising computer instructions, wherein in a case where the processor executes the computer instructions, the electronic device executes the method according to any one of claims 1 to 17.
  20. A computer-readable storage medium in which a computer program is stored, the computer program comprising program instructions which, in a case where they are executed by a processor, cause the processor to execute the method according to any one of claims 1 to 17.
  21. A computer program, comprising computer-readable code which, when run in an electronic device, causes a processor in the electronic device to execute the image processing method according to any one of claims 1 to 17.
  22. A computer program product, comprising a computer program or instructions which, in a case where the computer program or instructions are run on a computer, cause the computer to execute the image processing method according to any one of claims 1 to 17.
PCT/CN2022/105205 2021-07-30 2022-07-12 Image processing method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product WO2023005659A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110874129.5A 2021-07-30 2021-07-30 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN202110874129.5 2021-07-30

Publications (1)

Publication Number Publication Date
WO2023005659A1 true WO2023005659A1 (zh) 2023-02-02

Family

ID=78252976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/105205 WO2023005659A1 (zh) Image processing method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product

Country Status (2)

Country Link
CN (1) CN113592918A (zh)
WO (1) WO2023005659A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592918A (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886078A (zh) * 2018-12-29 2019-06-14 华为技术有限公司 Target object retrieval and positioning method and apparatus
CN110113534A (zh) * 2019-05-13 2019-08-09 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, and mobile terminal
CN112925941A (zh) * 2021-03-25 2021-06-08 深圳市商汤科技有限公司 Data processing method and apparatus, electronic device, and computer-readable storage medium
CN113592918A (zh) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113592918A (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2021010660A1 (en) System and method for augmented reality scenes
US20220319139A1 (en) Multi-endpoint mixed-reality meetings
WO2018188499A1 (zh) Image and video processing method and apparatus, virtual reality apparatus, and storage medium
US10249089B2 System and method for representing remote participants to a meeting
JP5807686B2 (ja) Image processing apparatus, image processing method, and program
CN108347657B (zh) Method and apparatus for displaying bullet-screen comment information
US11893702B2 Virtual object processing method and apparatus, and storage medium and electronic device
CN111080799A (zh) Scene roaming method, system, apparatus, and storage medium based on three-dimensional modeling
WO2021213067A1 (zh) Item display method and apparatus, device, and storage medium
WO2020017890A1 (en) System and method for 3d association of detected objects
US9996960B2 Augmented reality system and method
WO2021052392A1 (zh) Three-dimensional scene rendering method and apparatus, and electronic device
US10437342B2 Calibration systems and methods for depth-based interfaces with disparate fields of view
WO2021162201A1 (en) Click-and-lock zoom camera user interface
WO2023005659A1 (zh) Image processing method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
US20140247209A1 Method, system, and apparatus for image projection
CN113129362A (zh) Method and apparatus for acquiring three-dimensional coordinate data
WO2023273154A1 (zh) Image processing method and apparatus, device, medium, and program
US20200226833A1 A method and system for providing a user interface for a 3d environment
WO2023273155A1 (zh) Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
JP6699406B2 (ja) Information processing apparatus, program, position information creation method, and information processing system
WO2022121243A1 (zh) Calibration method and apparatus, electronic device, storage medium, and program product
CN114494960A Video processing method and apparatus, electronic device, and computer-readable storage medium
CN113421343A Method for observing the internal structure of a device based on augmented reality
JP6304305B2 (ja) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22848264

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE