CN109961471A - Annotation method and apparatus for object positions in images, and electronic device - Google Patents

Annotation method and apparatus for object positions in images, and electronic device

Info

Publication number
CN109961471A
CN109961471A (application CN201711340685.4A)
Authority
CN
China
Prior art keywords
model
annotated
camera model
pose information
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711340685.4A
Other languages
Chinese (zh)
Other versions
CN109961471B (en)
Inventor
王旭
张彦刚
马星辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd filed Critical Beijing Orion Star Technology Co Ltd
Priority to CN201711340685.4A priority Critical patent/CN109961471B/en
Publication of CN109961471A publication Critical patent/CN109961471A/en
Application granted granted Critical
Publication of CN109961471B publication Critical patent/CN109961471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide an annotation method and apparatus for object positions in images, and an electronic device. The method includes: obtaining the current pose information of a pre-constructed camera model; obtaining the current pose information and physical parameters of a pre-constructed object model to be annotated; obtaining, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system; determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model; and annotating the pixel position in the image. In a virtual environment, the pixel position of the object to be annotated in the image acquired by the camera can thus be annotated automatically, eliminating manual annotation work, and the pose of the object to be annotated can be changed quickly to obtain a large number of annotated images, improving annotation efficiency.

Description

Annotation method and apparatus for object positions in images, and electronic device
Technical field
The present invention relates to the technical field of image processing, and in particular to an annotation method and apparatus for object positions in images, and an electronic device.
Background art
With the continuous improvement of computing power, deep learning models are used more and more widely. Among them, deep learning models for image processing play a significant role, for example in fields such as object grasping by robotic arms, license plate recognition, and target detection in surveillance video.
Training these deep learning models requires collecting a large number of image samples, that is, shooting a target object at various angles and positions to acquire a large number of images, where the target object is the object that actually needs to be detected, for example an object to be grasped by a robotic arm or a vehicle license plate. In these images, the position of the target object needs to be annotated, and the annotated images serve as image samples for training the deep learning model.
The position of the target object is generally annotated manually: in each acquired image, the position of the target object is determined by eye and then labeled, yielding an image sample. Clearly, this approach wastes considerable manpower and time, and annotation efficiency is very low.
Summary of the invention
The purpose of embodiments of the present invention is to provide an annotation method and apparatus for object positions in images, and an electronic device, so as to eliminate manual annotation work and improve annotation efficiency. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides an annotation method for object positions in images, the method including:
obtaining the current pose information of a pre-constructed camera model;
obtaining the current pose information and physical parameters of a pre-constructed object model to be annotated, where the physical parameters are parameters identifying the size of the object model to be annotated;
obtaining, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system;
determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model;
annotating the pixel position in the image.
Optionally, before the step of obtaining the current pose information of the pre-constructed camera model, the method further includes:
obtaining the current pose information of a pre-constructed robotic arm model, where the camera model is fixedly connected to the end of the robotic arm model;
and the step of obtaining the current pose information of the pre-constructed camera model includes:
determining the current pose information of the camera model according to the current pose information of the robotic arm model.
Optionally, the physical parameters include: the position of a preset marker point of the object model to be annotated, and a volume parameter of the object model to be annotated;
and the step of determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model includes:
determining, according to the intrinsic parameter matrix of the camera model and the target pose information, a first target position of the preset marker point in the image currently acquired by the camera model;
determining, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Optionally, the preset marker point is the lower-left vertex of the object model to be annotated;
and the step of determining, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model includes:
determining, according to the first target position and the volume parameter, a second target position of the upper-right vertex of the object model to be annotated in the image currently acquired by the camera model;
determining the region within the rectangular frame whose diagonal corners are the first target position and the second target position as the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Optionally, after the step of annotating the pixel position in the image, the method further includes:
recording, in correspondence with the image currently acquired by the camera model, the pixel position, the current pose information of the camera model, and the current pose information of the object model to be annotated.
Optionally, the method further includes:
changing the pose of the camera model and/or of the object model to be annotated, and returning to the step of obtaining the current pose information of the pre-constructed camera model.
In a second aspect, an embodiment of the present invention provides an annotation apparatus for object positions in images, the apparatus including:
a camera model data acquisition module, configured to obtain the current pose information of a pre-constructed camera model;
an object-model-to-be-annotated data acquisition module, configured to obtain the current pose information and physical parameters of a pre-constructed object model to be annotated, where the physical parameters are parameters identifying the size of the object model to be annotated;
a target pose information determination module, configured to obtain, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system;
a pixel position determination module, configured to determine, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model;
a pixel position annotation module, configured to annotate the pixel position in the image.
Optionally, the apparatus further includes:
a robotic arm model data acquisition module, configured to obtain, before the current pose information of the pre-constructed camera model is obtained, the current pose information of a pre-constructed robotic arm model, where the camera model is fixedly connected to the end of the robotic arm model;
and the camera model data acquisition module includes:
a current pose information obtaining unit, configured to determine the current pose information of the camera model according to the current pose information of the robotic arm model.
Optionally, the physical parameters include: the position of a preset marker point of the object model to be annotated, and a volume parameter of the object model to be annotated;
and the pixel position determination module includes:
a first target position determination unit, configured to determine, according to the intrinsic parameter matrix of the camera model and the target pose information, a first target position of the preset marker point in the image currently acquired by the camera model;
a pixel position determination unit, configured to determine, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Optionally, the preset marker point is the lower-left vertex of the object model to be annotated;
and the pixel position determination unit includes:
a second target position determination subunit, configured to determine, according to the first target position and the volume parameter, a second target position of the upper-right vertex of the object model to be annotated in the image currently acquired by the camera model;
a pixel position determination subunit, configured to determine the region within the rectangular frame whose diagonal corners are the first target position and the second target position as the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Optionally, the apparatus further includes:
an information recording module, configured to record, after the pixel position is annotated in the image and in correspondence with the image currently acquired by the camera model, the pixel position, the current pose information of the camera model, and the current pose information of the object model to be annotated.
Optionally, the apparatus further includes:
a pose changing module, configured to change the pose of the camera model and/or of the object model to be annotated, and to trigger the camera model data acquisition module.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a communication bus, where the processor and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the steps of the above annotation method for object positions in images when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above annotation method for object positions in images.
In the solutions provided by the embodiments of the present invention, the current pose information of a pre-constructed camera model and the current pose information and physical parameters of a pre-constructed object model to be annotated are obtained first; then, according to the current pose information of the camera model and of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system is obtained by coordinate transformation; next, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model is determined; finally, the pixel position is annotated in the image. In a virtual environment, the pixel position of the object to be annotated in the image acquired by the camera can thus be annotated automatically, eliminating manual annotation work, and the pose of the object to be annotated can be changed quickly to obtain a large number of annotated images, greatly improving image annotation efficiency.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an annotation method for object positions in images according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a camera model mounted on the end of a robotic arm model;
Fig. 3 is a detailed flowchart of step S104 in the embodiment shown in Fig. 1;
Fig. 4 is a detailed flowchart of step S302 in the embodiment shown in Fig. 3;
Fig. 5 is a schematic diagram of an annotated image acquired by the camera model;
Fig. 6 is a schematic structural diagram of an annotation apparatus for object positions in images according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In order to eliminate manual annotation work and improve image annotation efficiency when annotating object positions in images, embodiments of the present invention provide an annotation method, an apparatus, an electronic device, and a computer-readable storage medium for object positions in images.
The annotation method for object positions in images provided by an embodiment of the present invention is introduced first.
The annotation method for object positions in images provided by an embodiment of the present invention can be applied to any electronic device that needs to annotate object positions in images, for example a computer, a tablet computer, or a mobile phone, which is not specifically limited here and is hereinafter referred to as the electronic device.
To make the annotation of object positions in images easier, the electronic device can obtain the camera model and the object model to be annotated through application programs such as Gazebo or CAD software. The camera model and the object model to be annotated can be constructed in advance according to actual needs. For example, if the annotated images are used to train a deep learning model for object grasping by a robotic arm, the object to be annotated is generally the target object that the robotic arm needs to grasp, and the object model to be annotated can be constructed according to parameters such as the true shape and size of that target object. Similarly, in this case the camera model can be constructed according to the actual intrinsic and extrinsic parameters of the camera mounted at the end of the robotic arm.
As shown in Fig. 1, an annotation method for object positions in images includes the following steps, for which a minimal code sketch is given after this list:
S101: obtaining the current pose information of a pre-constructed camera model;
S102: obtaining the current pose information and physical parameters of a pre-constructed object model to be annotated;
where the physical parameters are parameters identifying the size of the object model to be annotated;
S103: obtaining, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system;
S104: determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model;
S105: annotating the pixel position in the image.
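For illustration only, a minimal Python sketch of this pipeline is shown below. The object interfaces (current_pose, physical_parameters, current_image) and the helpers project_to_pixel_region and draw_annotation are assumptions, not part of the claimed method; they are elaborated in the later sketches.

```python
import numpy as np

def annotate_current_image(camera_model, object_model):
    """Sketch of steps S101-S105 under assumed helper interfaces."""
    # S101: current pose of the camera model in the world frame (4x4 homogeneous matrix assumed)
    T_world_cam = camera_model.current_pose()
    # S102: current pose and size parameters of the object model to be annotated
    T_world_obj = object_model.current_pose()
    size = object_model.physical_parameters()
    # S103: express the object pose in the camera model coordinate system
    T_cam_obj = np.linalg.inv(T_world_cam) @ T_world_obj
    # S104: project to a pixel region using the intrinsic parameter matrix K (assumed helper)
    bbox = project_to_pixel_region(camera_model.K, T_cam_obj, size)
    # S105: annotate the pixel position in the currently acquired image (assumed helper)
    image = camera_model.current_image()
    return draw_annotation(image, bbox)
```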
As it can be seen that electronic equipment obtains the camera model constructed in advance first in scheme provided by the embodiment of the present invention Current posture information, and the current posture information and physical parameter of the object model to be marked that construct in advance, then basis The current posture information of camera model and the current posture information of object model to be marked are obtained by coordinate transform wait mark Object pose information of the object model in camera model coordinate system is infused, further according to inner parameter matrix, physical parameter and mesh Posture information is marked, location of pixels of the object model to be marked in camera model current acquired image is determined, finally in image Middle mark location of pixels.The location of pixels of object to be marked in the image of video camera acquisition can be carried out under virtual environment Mark removes artificial mark work from, and can quickly change the pose of object to be marked, the image after obtaining a large amount of marks, pole The earth promotes image labeling efficiency.
It should be noted that there is no successive limitations for the execution sequence of above-mentioned steps S101, step S102, can first hold Row step S101, can also first carry out step S102, and certainly, the two also may be performed simultaneously, this is all reasonably, as long as electronics The current posture information of the available camera model constructed in advance of equipment and the object model to be marked constructed in advance Current posture information and physical parameter, step S101, the execution sequence of step S102 can't cause subsequent step to appoint What is influenced.
Wherein, posture information may include three dimensional local information and 3 d pose information in world coordinate system.It can With understanding, in virtual environment, after camera model determines, its pose can be arbitrarily adjusted, for example, camera model Rotation, translation etc., in turn, the posture information of camera model can also be got, the 3 d pose information of camera model It may include the information such as the optical axis direction of camera model.
And the physical parameter of object model to be marked can be that can be identified for that the parameter of object model size to be marked.Example Such as, object model to be marked is a cylindrical cup submodel, then the physical parameter of object model to be marked can be cup mould The information such as the height of the diameter of type bottom circular, central coordinate of circle and cup model.In another example object model to be marked is one long The box model of cube, then the physical parameter of object model to be marked can be length and certain vertex of box model The information such as coordinate or center point coordinate.
In turn, in above-mentioned steps S103, electronic equipment can according to the current posture information of camera model and to The current posture information of mark object model obtains object model to be marked in camera model coordinate system by coordinate transform In object pose information.It is understood that camera model determine after, camera model coordinate system be it is known, that After the current posture information of camera model and the current posture information of object model to be marked determine, by coordinate transform, That is projection of the world coordinate system to camera model coordinate system, that is, pass through the coordinate transform of video camera external parameter matrix, It can determine object pose information of the object model to be marked in camera model coordinate system.Also it is assured that be marked Three-dimensional position and 3 d pose information of the object model in camera model coordinate system.
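As an illustration only, the following sketch expresses this coordinate transformation with 4x4 homogeneous transformation matrices; the use of numpy and the variable names are assumptions.

```python
import numpy as np

def object_pose_in_camera_frame(T_world_cam: np.ndarray,
                                T_world_obj: np.ndarray) -> np.ndarray:
    """Return the object pose expressed in the camera model coordinate system.

    T_world_cam: 4x4 pose of the camera model in the world frame.
    T_world_obj: 4x4 pose of the object model to be annotated in the world frame.
    """
    # Inverting the camera pose gives the world-to-camera (extrinsic) transform;
    # composing it with the object pose expresses the object in camera coordinates.
    T_cam_world = np.linalg.inv(T_world_cam)
    return T_cam_world @ T_world_obj
```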
Therefore, in step S104, the electronic device can determine the position of the object model to be annotated in the image currently acquired by the camera model according to the intrinsic parameter matrix of the camera model and the target pose information.
It can be understood that, once the camera model is determined, its intrinsic parameter matrix is known. The intrinsic parameters of the camera model can generally be expressed as the matrix K = [fx, 0, cx; 0, fy, cy; 0, 0, 1], where fx and fy are the focal lengths of the camera model along the x and y directions of the camera model coordinate system, and cx and cy are the coordinates of the principal point along the x and y directions of the imaging plane coordinate system, the principal point being the intersection of the optical axis of the camera model with the imaging plane. Once the camera model is determined, the intrinsic parameter matrix is also determined.
The target pose information represents the three-dimensional pose of the object model to be annotated in the camera model coordinate system, while the image coordinate system represents the position of the object to be annotated in the two-dimensional image acquired by the camera model. The role of the intrinsic parameter matrix of the camera model is precisely to perform the linear transformation between these two coordinate systems.
Since the pixel position of the object model to be annotated that needs to be annotated in the image currently acquired by the camera model is generally a region, namely the range of pixels covered by the object model to be annotated in that image, the electronic device can generally determine this region, that is, the pixel position of the object model to be annotated in the image currently acquired by the camera model, according to the physical parameters of the object model to be annotated and the determined position of the object model to be annotated in that image.
The electronic device can then annotate this pixel position in the image currently acquired by the camera model, and the annotated image can be used, for example, as a sample for training a deep learning model. The specific annotation manner for the pixel position can be determined according to the needs of subsequent deep learning model processing and is not specifically limited in this embodiment of the present invention; for example, highlighting or numeric marking may be used.
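Purely as an illustration, one possible way to mark a rectangular pixel region on the acquired image is sketched below; the use of OpenCV, the function name, and the box representation are assumptions, not part of the claimed method.

```python
import cv2

def draw_annotation(image, bbox, color=(0, 255, 0)):
    """Draw a rectangular annotation on the image.

    bbox: ((x_min, y_min), (x_max, y_max)) pixel coordinates of the region
          covered by the object model to be annotated.
    """
    (x_min, y_min), (x_max, y_max) = bbox
    # cv2.rectangle draws the box in place; thickness=2 keeps it visible.
    cv2.rectangle(image, (int(x_min), int(y_min)), (int(x_max), int(y_max)),
                  color, thickness=2)
    return image
```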
For the case in which the camera model is fixedly connected to the end of a pre-constructed robotic arm model, as an implementation of this embodiment of the present invention, before the step of obtaining the current pose information of the pre-constructed camera model, the above method may further include:
obtaining the current pose information of the pre-constructed robotic arm model.
As shown in Fig. 2, in this case the pose of the camera model 21 changes as the pose of the robotic arm model 22 changes: rotation or movement of the joints of the robotic arm model 22 causes the pose information of the camera model 21 to change. The electronic device can therefore obtain the current pose information of the pre-constructed robotic arm model 22; once the robotic arm model 22 is determined, its pose information, which is also expressed relative to the world coordinate system, can be determined as well. Either of the object model 23 and the object model 24 can serve as the object model to be annotated, or both can serve as object models to be annotated at the same time.
Correspondingly, the step of obtaining the current pose information of the pre-constructed camera model may include:
determining the current pose information of the camera model according to the current pose information of the robotic arm model.
After the electronic device obtains the current pose information of the robotic arm model, since the mounting position and angle of the camera model at the end of the robotic arm model are known, the electronic device can determine the current pose information of the camera model.
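As a minimal sketch only, assuming the end-of-arm pose and the fixed camera mounting transform are both available as 4x4 homogeneous matrices (the names below are hypothetical):

```python
import numpy as np

def camera_pose_from_arm(T_world_end: np.ndarray,
                         T_end_cam: np.ndarray) -> np.ndarray:
    """Derive the camera model pose from the robotic arm model pose.

    T_world_end: current pose of the arm end in the world frame
                 (e.g. from forward kinematics of the robotic arm model).
    T_end_cam:   fixed mounting transform of the camera relative to the arm end,
                 known from the mounting position and angle.
    """
    # Because the camera is rigidly attached to the arm end, its world pose is
    # simply the composition of the two transforms.
    return T_world_end @ T_end_cam
```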
As it can be seen that in the present embodiment, for the case where camera model is installed on manipulator model end, Ke Yigen According to the current posture information of manipulator model, the accurate current posture information for obtaining camera model, and then can be quickly obtained It is largely used to train the image pattern of the deep learning model of mechanical arm crawl object, trained mechanical arm crawl can be greatly improved The efficiency of the deep learning model of object.
As a kind of embodiment of the embodiment of the present invention, above-mentioned physical parameter may include: object model to be marked The position of default mark point and the volume parameter of object model to be marked.Wherein, presetting mark point can be according to object to be marked The purposes of image determines after the true form and mark of model, such as can be one on object model outer contour to be marked Point.If object model to be marked is regular shape, such as cube, cylindrical body etc., then default mark point can be for wait mark Infuse a certain vertex of object model.Certainly, it handles for convenience, even if object model to be marked is irregular shape, electronics is set Standby can also be approximately regular shape relatively, and then using a certain vertex of the regular shape as object to be marked The default mark point of model, this is all reasonable.
The volume parameter of object model to be marked is the parameter that can determine object model volume to be marked.When to be marked When object model is regular shape, such as cube, cylindrical body etc., then volume parameter can be that can mathematically determine these The parameters such as the parameter, that is, length of object model volume to be marked, are not specifically limited herein.When object mould to be marked When type is irregular shape, then electronic equipment can also be approximately regular shape relatively, and then by the rule The parameters such as the parameter, such as length that can mathematically indicate its volume of shape are joined as the volume of object model to be marked Number.
Be directed to object model to be marked physical parameter include object model to be marked default mark point position and For the case where volume parameter of object model to be marked, as shown in figure 3, the above-mentioned inner parameter according to the camera model Matrix, the physical parameter and the object pose information determine that the object model to be marked is worked as in the camera model The step of location of pixels in preceding acquisition image, may include:
S301 is determined described default according to the inner parameter matrix of the camera model and the object pose information Mark first object position of the position of point in the camera model current acquired image;
Since the position of default mark point generally uses the position in world coordinate system, so in order to accurately determine first Target position, electronic equipment can be first according to the object pose information of object model to be marked, by the position of default mark point It projects in camera model coordinate system.In another embodiment, electronic equipment can be first according to camera model External parameter matrix projects to the position of default mark point in camera model coordinate system, this is all reasonable.
As it is aforementioned it is found that the inner parameter matrix of camera model indicate camera model coordinate system and image coordinate system it Between linear changing relation, therefore, electronic equipment can according to the inner parameter matrix of camera model, by coordinate transform, Determine that the position of default mark point projects to position corresponding in camera model coordinate system, the position in image coordinate system It sets, that is, the first object position in camera model current acquired image.
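For illustration only, this projection under a standard pinhole model can be sketched as follows; the function name and the numpy usage are assumptions.

```python
import numpy as np

def project_point(K: np.ndarray, p_cam: np.ndarray) -> tuple:
    """Project a 3D point, expressed in the camera model coordinate system,
    to pixel coordinates using the intrinsic parameter matrix K.

    K:     3x3 intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    p_cam: 3-vector (X, Y, Z) of the preset marker point in camera coordinates.
    """
    X, Y, Z = p_cam
    # Standard pinhole projection: divide by depth, then apply the focal
    # lengths and the principal point offsets.
    u = K[0, 0] * X / Z + K[0, 2]
    v = K[1, 1] * Y / Z + K[1, 2]
    return (u, v)  # the "first target position" in image coordinates
```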
S302: determining, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model.
After the electronic device determines the first target position, it can determine, according to the volume parameter of the object model to be annotated, the range of pixels covered by the object model to be annotated in the image currently acquired by the camera model, that is, the pixel position of the object model to be annotated in that image. It should be noted that, in order to determine this pixel range more accurately, the volume parameter here is generally the volume parameter of the object to be annotated in the image currently acquired by the camera model, as determined by the imaging principle of the camera model. Its determination can use any formula from the field of camera imaging processing, which is not specifically limited or elaborated here.
For example, suppose the object model to be annotated is a cube, the preset marker point is the center point of the object model to be annotated, the electronic device determines the first target position to be (25, 42), and the volume parameter of the object to be annotated in the image currently acquired by the camera model is a square with side length 6. The pixel position of the object model to be annotated in the image currently acquired by the camera model is then the square region with diagonal corners (22, 39) and (28, 45).
As can be seen, in this embodiment, the electronic device can determine the pixel position of the object model to be annotated in the image currently acquired by the camera model according to the preset marker point and the volume parameter, so that the pixel position corresponding to the object model to be annotated can be determined quickly and accurately, further improving the accuracy and efficiency of image annotation.
In order to determine the pixel position of the object model to be annotated in the image currently acquired by the camera model even more quickly, as an implementation of this embodiment of the present invention, the preset marker point may be the lower-left vertex of the object model to be annotated.
Correspondingly, as shown in Fig. 4, the step of determining, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model may include:
S401: determining, according to the first target position and the volume parameter, a second target position of the upper-right vertex of the object model to be annotated in the image currently acquired by the camera model;
It should be noted that, in this embodiment, where the preset marker point is the lower-left vertex of the object model to be annotated, if the object model to be annotated has a regular shape, the lower-left vertex refers to the vertex at the lower-left corner of the object model to be annotated in the imaging plane of the camera model; if the object model to be annotated has an irregular shape, the electronic device may approximate it by a comparatively regular shape, and the lower-left vertex then refers to the vertex at the lower-left corner of that comparatively regular shape in the imaging plane of the camera model.
The electronic device can then determine, according to the first target position and the volume parameter, the second target position of the upper-right vertex of the object model to be annotated in the image currently acquired by the camera model. For example, as shown in Fig. 5, the object model to be annotated 51 is a cuboid, the first target position 52 is (15, 20), and the volume parameter of the object to be annotated in the image currently acquired by the camera model is a length of 12 and a width of 8. Obviously, the electronic device can easily determine that the second target position 53 is (27, 28).
S402: determining the region within the rectangular frame whose diagonal corners are the first target position and the second target position as the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Since the region within the rectangular frame whose diagonal corners are the first target position and the second target position can cover the range of pixels occupied by the object model to be annotated in the image currently acquired by the camera model, and since a rectangular frame as an annotation form also fits the training conventions of most deep learning models, after determining the second target position the electronic device can determine the region within that rectangular frame as the pixel position of the object model to be annotated in the image currently acquired by the camera model. As shown in Fig. 5, the region within the rectangular frame 54 is exactly the pixel position of the object model to be annotated 51 in the image currently acquired by the camera model.
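As a minimal sketch of steps S401 and S402 only, reproducing the Fig. 5 example (the function name and argument conventions are assumptions):

```python
def bounding_box_from_lower_left(first_target, length, width):
    """Compute the annotation rectangle from the projected lower-left vertex.

    first_target: (u, v) pixel position of the lower-left vertex (result of S301).
    length, width: extent of the object in the acquired image, in pixels.
    Returns the diagonal corners of the rectangular annotation frame.
    """
    u, v = first_target
    second_target = (u + length, v + width)   # upper-right vertex (S401)
    return first_target, second_target        # diagonal corners of the box (S402)

# Fig. 5 example: lower-left vertex at (15, 20) with length 12 and width 8
# yields the diagonal corners (15, 20) and (27, 28).
box = bounding_box_from_lower_left((15, 20), 12, 8)
```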
As it can be seen that in the present embodiment, electronic equipment can determine the second target position according to first object position, and then will Using first object position and the second target position as the region in the rectangle frame of angle steel joint, it is determined as object model to be marked and is taking the photograph Location of pixels in camera model current acquired image can more quickly and easily determine the pixel position of object model to be marked It sets, further promotes the efficiency of image labeling.
As a kind of embodiment of the embodiment of the present invention, in the above-mentioned step for marking the location of pixels in described image After rapid, the above method can also include:
Corresponding to the camera model current acquired image, the location of pixels is recorded, the camera model is worked as The current posture information of preceding posture information and the object model to be marked.
In order to adapt to the needs of various deep learning models, after electronic equipment marks location of pixels in the picture, may be used also To correspond to camera model current acquired image, record the location of pixels of object model to be marked, camera model it is current The information such as posture information and the current posture information of object model to be marked, type, in this way, in subsequent trained deep learning model When, the various information recorded can be obtained according to actual needs.For recording the side of the location of pixels of object model to be marked Formula can be not specifically limited herein using modes such as vertex, angle steel joint, the side lengths of record callout box.
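Purely as an illustration of one possible record layout, the sketch below writes one JSON record per annotated image; the field names and the choice of JSON are assumptions, not part of the described method.

```python
import json

def record_annotation(path, image_file, bbox, camera_pose, object_pose, label):
    """Store the annotation information alongside the acquired image."""
    record = {
        "image": image_file,                 # image currently acquired by the camera model
        "pixel_position": {"corners": bbox}, # diagonal corners of the annotation frame
        "camera_pose": camera_pose,          # current pose information of the camera model
        "object_pose": object_pose,          # current pose information of the object model
        "label": label,                      # object type, if needed by the training task
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
```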
As an implementation of this embodiment of the present invention, the above method may further include:
changing the pose of the camera model and/or of the object model to be annotated, and returning to the step of obtaining the current pose information of the pre-constructed camera model.
In order to obtain a large number of annotated images quickly, after finishing annotating the image currently acquired by the camera model, the electronic device may change the pose of the camera model and/or of the object model to be annotated, return to the step of obtaining the current pose information of the pre-constructed camera model, and execute the above annotation method steps in a loop.
It can be understood that, after the pose of the camera model and/or of the object model to be annotated changes, the position and/or orientation of the object model to be annotated in the image acquired by the camera model also changes. The electronic device can thus obtain a large number of differently annotated images, which can be used as image samples for training various deep learning models.
As can be seen, in this embodiment, the electronic device can change the pose of the camera model and/or of the object model to be annotated quickly, and since the changed poses of the camera model and/or of the object model to be annotated are known, a large number of annotated images can be obtained quickly, greatly improving the efficiency of image annotation.
Corresponding to the above annotation method for object positions in images, an embodiment of the present invention further provides an annotation apparatus for object positions in images.
The annotation apparatus for object positions in images provided by an embodiment of the present invention is introduced below.
As shown in Fig. 6, an annotation apparatus for object positions in images includes:
a camera model data acquisition module 610, configured to obtain the current pose information of a pre-constructed camera model;
an object-model-to-be-annotated data acquisition module 620, configured to obtain the current pose information and physical parameters of a pre-constructed object model to be annotated;
where the physical parameters are parameters identifying the size of the object model to be annotated;
a target pose information determination module 630, configured to obtain, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system;
a pixel position determination module 640, configured to determine, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model;
a pixel position annotation module 650, configured to annotate the pixel position in the image.
As it can be seen that electronic equipment obtains the camera model constructed in advance first in scheme provided by the embodiment of the present invention Current posture information, and the current posture information and physical parameter of the object model to be marked that construct in advance, then basis The current posture information of camera model and the current posture information of object model to be marked are obtained by coordinate transform wait mark Object pose information of the object model in camera model coordinate system is infused, further according to inner parameter matrix, physical parameter and mesh Posture information is marked, location of pixels of the object model to be marked in camera model current acquired image is determined, finally in image Middle mark location of pixels.The location of pixels of object to be marked in the image of video camera acquisition can be carried out under virtual environment Mark removes artificial mark work from, and can quickly change the pose of object to be marked, the image after obtaining a large amount of marks, pole The earth promotes image labeling efficiency.
As a kind of embodiment of the embodiment of the present invention, above-mentioned apparatus can also include:
Manipulator model data acquisition module (is not shown) in Fig. 6, for obtaining the video camera mould constructed in advance described Before the current posture information of type, the current posture information of the manipulator model constructed in advance is obtained, wherein the video camera mould Type is fixedly connected with the end of the manipulator model;
Above-mentioned camera model data acquisition module 610 may include:
Current pose information acquisition unit (being not shown in Fig. 6), for being believed according to the current pose of the manipulator model Breath, determines the current posture information of camera model.
As a kind of embodiment of the embodiment of the present invention, above-mentioned physical parameter may include: the object mould to be marked The position of the default mark point of type and the volume parameter of the object model to be marked;
Above-mentioned location of pixels determining module 640 may include:
First object position determination unit (is not shown) in Fig. 6, for the inner parameter square according to the camera model Battle array and the object pose information determine the position of the default mark point in the camera model current acquired image First object position;
Location of pixels determination unit (is not shown) in Fig. 6, is used for according to the first object position and the volume parameter, Determine location of pixels of the object model to be marked in the camera model current acquired image.
As a kind of embodiment of the embodiment of the present invention, above-mentioned default mark point can be the object model to be marked Bottom left vertex;
Above-mentioned location of pixels determination unit may include:
Second target position determines subelement (being not shown in Fig. 6), for according to the first object position and the body Product parameter, determines second mesh of the right vertices of the object model to be marked in the camera model current acquired image Cursor position;
Location of pixels determines subelement (being not shown in Fig. 6), and being used for will be with the first object position and second mesh Mark is set to the region in the rectangle frame of angle steel joint, is determined as the object model to be marked and currently adopts in the camera model Collect the location of pixels in image.
As a kind of embodiment of the embodiment of the present invention, above-mentioned apparatus can also include:
Information logging modle (is not shown) in Fig. 6, for it is described the location of pixels is marked in described image after, Corresponding to the camera model current acquired image, the current pose letter of the location of pixels, the camera model is recorded The current posture information of breath and the object model to be marked.
As a kind of embodiment of the embodiment of the present invention, above-mentioned apparatus can also include:
Pose changes module (being not shown in Fig. 6), for changing the camera model and/or the object mould to be marked The pose of type, and trigger the camera model data acquisition module 610.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 7, the electronic device includes a processor 701, a memory 702, and a communication bus 703, where the processor 701 and the memory 702 communicate with each other through the communication bus 703.
The memory 702 is configured to store a computer program.
The processor 701 is configured to implement the following steps when executing the program stored in the memory 702:
obtaining the current pose information of a pre-constructed camera model;
obtaining the current pose information and physical parameters of a pre-constructed object model to be annotated, where the physical parameters are parameters identifying the size of the object model to be annotated;
obtaining, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system;
determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model;
annotating the pixel position in the image.
As can be seen, in the solution provided by this embodiment of the present invention, the electronic device first obtains the current pose information of the pre-constructed camera model and the current pose information and physical parameters of the pre-constructed object model to be annotated; then, according to the current pose information of the camera model and of the object model to be annotated, it obtains, by coordinate transformation, the target pose information of the object model to be annotated in the camera model coordinate system; next, according to the intrinsic parameter matrix, the physical parameters, and the target pose information, it determines the pixel position of the object model to be annotated in the image currently acquired by the camera model; finally, it annotates the pixel position in the image. In a virtual environment, the pixel position of the object to be annotated in the image acquired by the camera can thus be annotated automatically, eliminating manual annotation work, and the pose of the object to be annotated can be changed quickly to obtain a large number of annotated images, greatly improving image annotation efficiency.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Optionally, before the step of obtaining the current pose information of the pre-constructed camera model, the above method may further include:
obtaining the current pose information of a pre-constructed robotic arm model, where the camera model is fixedly connected to the end of the robotic arm model;
and the step of obtaining the current pose information of the pre-constructed camera model includes:
determining the current pose information of the camera model according to the current pose information of the robotic arm model.
Optionally, the above physical parameters may include: the position of a preset marker point of the object model to be annotated, and a volume parameter of the object model to be annotated;
and the step of determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model may include:
determining, according to the intrinsic parameter matrix of the camera model and the target pose information, a first target position of the preset marker point in the image currently acquired by the camera model;
determining, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Optionally, the above preset marker point may be the lower-left vertex of the object model to be annotated;
and the step of determining, according to the first target position and the volume parameter, the pixel position of the object model to be annotated in the image currently acquired by the camera model may include:
determining, according to the first target position and the volume parameter, a second target position of the upper-right vertex of the object model to be annotated in the image currently acquired by the camera model;
determining the region within the rectangular frame whose diagonal corners are the first target position and the second target position as the pixel position of the object model to be annotated in the image currently acquired by the camera model.
Optionally, after the step of annotating the pixel position in the image, the above method may further include:
recording, in correspondence with the image currently acquired by the camera model, the pixel position, the current pose information of the camera model, and the current pose information of the object model to be annotated.
Optionally, the above method may further include:
changing the pose of the camera model and/or of the object model to be annotated, and returning to the step of obtaining the current pose information of the pre-constructed camera model.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the following steps:
obtaining the current pose information of a pre-constructed camera model;
obtaining the current pose information and physical parameters of a pre-constructed object model to be annotated, where the physical parameters are parameters identifying the size of the object model to be annotated;
obtaining, by coordinate transformation according to the current pose information of the camera model and the current pose information of the object model to be annotated, target pose information of the object model to be annotated in the camera model coordinate system;
determining, according to the intrinsic parameter matrix of the camera model, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model;
annotating the pixel position in the image.
As can be seen, in the solution provided by this embodiment of the present invention, when the computer program is executed by the processor, the current pose information of the pre-constructed camera model and the current pose information and physical parameters of the pre-constructed object model to be annotated are obtained first; then, according to the current pose information of the camera model and of the object model to be annotated, the target pose information of the object model to be annotated in the camera model coordinate system is obtained by coordinate transformation; next, according to the intrinsic parameter matrix, the physical parameters, and the target pose information, the pixel position of the object model to be annotated in the image currently acquired by the camera model is determined; finally, the pixel position is annotated in the image. In a virtual environment, the pixel position of the object to be annotated in the image acquired by the camera can thus be annotated automatically, eliminating manual annotation work, and the pose of the object to be annotated can be changed quickly to obtain a large number of annotated images, greatly improving image annotation efficiency.
Wherein, before the current posture information for the camera model that above-mentioned acquisition constructs in advance the step of, the above method Can also include:
Obtain the current posture information of the manipulator model constructed in advance, wherein the camera model and the machinery The end of arm model is fixedly connected;
The step of current posture information for the camera model that above-mentioned acquisition constructs in advance, comprising:
According to the current posture information of the manipulator model, the current posture information of camera model is determined.
Wherein, above-mentioned physical parameter may include: the position of the default mark point of the object model to be marked and described The volume parameter of object model to be marked;
It is above-mentioned according to the inner parameter matrix of the camera model, the physical parameter and the object pose information, The step of determining location of pixels of the object model to be marked in the camera model current acquired image, can wrap It includes:
According to the inner parameter matrix of the camera model and the object pose information, the default mark point is determined First object position of the position in the camera model current acquired image;
According to the first object position and the volume parameter, determine the object model to be marked in the video camera Location of pixels in model current acquired image.
Wherein, the preset mark point may be the bottom-left vertex of the object model to be marked.
The step of determining the pixel position of the object model to be marked in the image currently acquired by the camera model according to the first target position and the volume parameter may include:
determining, according to the first target position and the volume parameter, a second target position of the top-right vertex of the object model to be marked in the image currently acquired by the camera model;
determining the region within the rectangular box whose diagonal corner points are the first target position and the second target position as the pixel position of the object model to be marked in the image currently acquired by the camera model.
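The following sketch illustrates both sub-steps described above, under the assumptions that the preset mark point (bottom-left vertex) is given in the object model's own frame, that the volume parameter is an axis-aligned (width, height, depth) size, and that the top-right vertex can be approximated by offsetting the bottom-left vertex by the width and height before projection; it reuses the hypothetical project_point helper from the earlier sketch.

```python
import numpy as np

def annotate_bounding_box(K, T_camera_object, bottom_left_obj, volume_whd):
    # Volume parameter assumed to be (width, height, depth) in the object frame.
    w, h, _depth = volume_whd
    # First target position: projection of the preset mark point (bottom-left vertex).
    p1 = project_point(K, T_camera_object, np.asarray(bottom_left_obj, dtype=float))
    # Second target position: projection of the top-right vertex, obtained here
    # by offsetting the bottom-left vertex by width and height (depth is ignored
    # in this simplified sketch).
    top_right_obj = np.asarray(bottom_left_obj, dtype=float) + np.array([w, h, 0.0])
    p2 = project_point(K, T_camera_object, top_right_obj)
    # Pixel position: the rectangle whose diagonal corners are the two target positions.
    x_min, y_min = np.minimum(p1, p2)
    x_max, y_max = np.maximum(p1, p2)
    return float(x_min), float(y_min), float(x_max), float(y_max)
```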
Wherein, after the step of marking the pixel position in the image, the above method may further include:
recording, in correspondence with the image currently acquired by the camera model, the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked.
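As one possible way to record such annotations (the storage format is an assumption, not part of the invention), each image could be written to a JSON-lines file together with the pixel position and the two poses:

```python
import json
import numpy as np

def record_annotation(path, image_id, pixel_box, camera_pose, object_pose):
    # Append one record linking the currently acquired image to its pixel
    # position and to the camera/object poses that produced it.
    record = {
        "image": image_id,
        "pixel_position": [float(v) for v in pixel_box],
        "camera_pose": np.asarray(camera_pose).tolist(),
        "object_pose": np.asarray(object_pose).tolist(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```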
Wherein, the above method may further include:
changing the pose of the camera model and/or the object model to be marked, and returning to the step of obtaining the current pose information of the pre-constructed camera model.
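Combining the sketches above, a hedged outline of the overall data-generation loop might look as follows; sim, randomize_arm_pose, randomize_object_pose and render_image stand for a hypothetical simulation interface and are not defined by the patent.

```python
def generate_dataset(n_samples, sim, K, T_armend_camera, mark_point_obj, volume_whd):
    # Repeatedly change the pose of the camera (via the manipulator) and/or the
    # object model, then rerun the annotation steps sketched above.
    for i in range(n_samples):
        T_world_armend = sim.randomize_arm_pose()
        T_world_object = sim.randomize_object_pose()
        T_world_camera = camera_pose_from_arm(T_world_armend, T_armend_camera)
        T_camera_object = object_pose_in_camera(T_world_camera, T_world_object)
        box = annotate_bounding_box(K, T_camera_object, mark_point_obj, volume_whd)
        image_id = sim.render_image(f"img_{i:06d}.png")
        record_annotation("annotations.jsonl", image_id, box,
                          T_world_camera, T_world_object)
```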
It should be noted that, for the above apparatus, electronic device and computer readable storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for related parts, reference may be made to the description of the method embodiments.
It should be further noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements not only includes those elements, but also includes other elements not explicitly listed, or further includes elements inherent to such process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The embodiments in this specification are described in a related manner; the same or similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively brief; for related parts, reference may be made to the description of the method embodiment.
The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for marking the position of an object in an image, characterized in that the method comprises:
obtaining current pose information of a pre-constructed camera model;
obtaining current pose information and a physical parameter of a pre-constructed object model to be marked, wherein the physical parameter is a parameter identifying the size of the object model to be marked;
obtaining, through coordinate transformation, object pose information of the object model to be marked in the camera model coordinate system according to the current pose information of the camera model and the current pose information of the object model to be marked;
determining, according to an internal parameter matrix of the camera model, the physical parameter and the object pose information, a pixel position of the object model to be marked in an image currently acquired by the camera model;
marking the pixel position in the image.
2. The method according to claim 1, characterized in that, before the step of obtaining the current pose information of the pre-constructed camera model, the method further comprises:
obtaining current pose information of a pre-constructed manipulator model, wherein the camera model is fixedly connected to the end of the manipulator model;
the step of obtaining the current pose information of the pre-constructed camera model comprises:
determining the current pose information of the camera model according to the current pose information of the manipulator model.
3. The method according to claim 1, characterized in that the physical parameter comprises: a position of a preset mark point of the object model to be marked and a volume parameter of the object model to be marked;
the step of determining the pixel position of the object model to be marked in the image currently acquired by the camera model according to the internal parameter matrix of the camera model, the physical parameter and the object pose information comprises:
determining, according to the internal parameter matrix of the camera model and the object pose information, a first target position of the position of the preset mark point in the image currently acquired by the camera model;
determining, according to the first target position and the volume parameter, the pixel position of the object model to be marked in the image currently acquired by the camera model.
4. The method according to claim 3, characterized in that the preset mark point is the bottom-left vertex of the object model to be marked;
the step of determining the pixel position of the object model to be marked in the image currently acquired by the camera model according to the first target position and the volume parameter comprises:
determining, according to the first target position and the volume parameter, a second target position of the top-right vertex of the object model to be marked in the image currently acquired by the camera model;
determining the region within the rectangular box whose diagonal corner points are the first target position and the second target position as the pixel position of the object model to be marked in the image currently acquired by the camera model.
5. The method according to claim 1, characterized in that, after the step of marking the pixel position in the image, the method further comprises:
recording, in correspondence with the image currently acquired by the camera model, the pixel position, the current pose information of the camera model and the current pose information of the object model to be marked.
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
changing the pose of the camera model and/or the object model to be marked, and returning to the step of obtaining the current pose information of the pre-constructed camera model.
7. A device for marking the position of an object in an image, characterized in that the device comprises:
a camera model data acquisition module, configured to obtain current pose information of a pre-constructed camera model;
an object model data acquisition module, configured to obtain current pose information and a physical parameter of a pre-constructed object model to be marked, wherein the physical parameter is a parameter identifying the size of the object model to be marked;
an object pose information determining module, configured to obtain, through coordinate transformation, object pose information of the object model to be marked in the camera model coordinate system according to the current pose information of the camera model and the current pose information of the object model to be marked;
a pixel position determining module, configured to determine, according to an internal parameter matrix of the camera model, the physical parameter and the object pose information, a pixel position of the object model to be marked in an image currently acquired by the camera model;
a pixel position marking module, configured to mark the pixel position in the image.
8. The device according to claim 7, characterized in that the device further comprises:
a manipulator model data acquisition module, configured to obtain, before the current pose information of the pre-constructed camera model is obtained, current pose information of a pre-constructed manipulator model, wherein the camera model is fixedly connected to the end of the manipulator model;
the camera model data acquisition module comprises:
a current pose information acquisition unit, configured to determine the current pose information of the camera model according to the current pose information of the manipulator model.
9. An electronic device, characterized by comprising a processor, a memory and a communication bus, wherein the processor and the memory communicate with each other via the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1 to 6 when executing the program stored in the memory.
10. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 6.
CN201711340685.4A 2017-12-14 2017-12-14 Method and device for marking position of object in image and electronic equipment Active CN109961471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711340685.4A CN109961471B (en) 2017-12-14 2017-12-14 Method and device for marking position of object in image and electronic equipment

Publications (2)

Publication Number Publication Date
CN109961471A true CN109961471A (en) 2019-07-02
CN109961471B CN109961471B (en) 2021-05-28

Family

ID=67018116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711340685.4A Active CN109961471B (en) 2017-12-14 2017-12-14 Method and device for marking position of object in image and electronic equipment

Country Status (1)

Country Link
CN (1) CN109961471B (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1847789A (en) * 2005-04-06 2006-10-18 佳能株式会社 Method and apparatus for measuring position and orientation
CN101319895A (en) * 2008-07-17 2008-12-10 上海交通大学 Hand-hold traffic accident fast on-site coordinate machine
CN101520904A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method with real environment estimation and reality augmenting system
CN103827631A (zh) * 2011-09-27 2014-05-28 Leica Geosystems AG Measuring system and method for marking a known target point in a coordinate system
US9691163B2 (en) * 2013-01-07 2017-06-27 Wexenergy Innovations Llc System and method of measuring distances related to an object utilizing ancillary objects
CN103606303A (en) * 2013-05-09 2014-02-26 陕西思智通教育科技有限公司 Presentation method and equipment for network teaching
CN104217441A (en) * 2013-08-28 2014-12-17 北京嘉恒中自图像技术有限公司 Mechanical arm positioning fetching method based on machine vision
WO2016070318A1 (en) * 2014-11-04 2016-05-12 SZ DJI Technology Co., Ltd. Camera calibration
CN104715479A (en) * 2015-03-06 2015-06-17 上海交通大学 Scene reproduction detection method based on augmented virtuality
CN105761217A (en) * 2016-01-29 2016-07-13 上海联影医疗科技有限公司 Image reconstruction method and device
CN106910223A (en) * 2016-11-02 2017-06-30 北京信息科技大学 A kind of Robotic Hand-Eye Calibration method based on convex lax global optimization approach

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HE XINHUA ET AL.: "Research on a Camera Calibration Method Based on a Single Two-Dimensional Image" (基于单幅二维图像的摄像机标定方法研究), Computer Science and Application (《计算机科学与应用》) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021033568A (en) * 2019-08-22 2021-03-01 ナブテスコ株式会社 Information processing system, information processing method, and construction machine
JP7383255B2 (en) 2019-08-22 2023-11-20 ナブテスコ株式会社 Information processing systems, information processing methods, construction machinery
CN113129365A (en) * 2019-12-30 2021-07-16 初速度(苏州)科技有限公司 Image calibration method and device
CN113129365B (en) * 2019-12-30 2022-06-24 魔门塔(苏州)科技有限公司 Image calibration method and device
CN113378606A (en) * 2020-03-10 2021-09-10 杭州海康威视数字技术股份有限公司 Method, device and system for determining labeling information
CN111695628A (en) * 2020-06-11 2020-09-22 北京百度网讯科技有限公司 Key point marking method and device, electronic equipment and storage medium
CN111695628B (en) * 2020-06-11 2023-05-05 北京百度网讯科技有限公司 Key point labeling method and device, electronic equipment and storage medium
CN113763572A (en) * 2021-09-17 2021-12-07 北京京航计算通讯研究所 3D entity labeling method based on AI intelligent recognition and storage medium
CN113763573A (en) * 2021-09-17 2021-12-07 北京京航计算通讯研究所 Three-dimensional object digital marking method and device

Also Published As

Publication number Publication date
CN109961471B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN109961471A (en) A kind of mask method, device and the electronic equipment of objects in images position
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN109584295B (en) Method, device and system for automatically labeling target object in image
CN110298370A (en) Network model training method, device and object pose determine method, apparatus
CN112258567B (en) Visual positioning method and device for object grabbing point, storage medium and electronic equipment
KR102110123B1 (en) Automated frame of reference calibration for augmented reality
CN104574267B (en) Bootstrap technique and information processing equipment
CN110176032B (en) Three-dimensional reconstruction method and device
CN109559371B (en) Method and device for three-dimensional reconstruction
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
CN104715479A (en) Scene reproduction detection method based on augmented virtuality
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN109035327B (en) Panoramic camera attitude estimation method based on deep learning
CN110111388A (en) Three-dimension object pose parameter estimation method and visual apparatus
CN109444146A (en) A kind of defect inspection method, device and the equipment of industrial processes product
CN109559341A (en) A kind of generation method and device of mechanical arm fetching
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN104318604A (en) 3D image stitching method and apparatus
US11373329B2 (en) Method of generating 3-dimensional model data
US11631195B2 (en) Indoor positioning system and indoor positioning method
CN110293553A (en) Control the method, apparatus and model training method, device of robotic arm manipulation object
CN113763478A (en) Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN110298877A (en) A kind of the determination method, apparatus and electronic equipment of object dimensional pose
CN109858319A (en) Image processing equipment and control method and non-transitory computer-readable storage media
CN112381873A (en) Data labeling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant