CN110598521A - Behavior and physiological state identification method based on intelligent analysis of face image

Behavior and physiological state identification method based on intelligent analysis of face image

Info

Publication number
CN110598521A
Authority
CN
China
Prior art keywords
face
face image
area
behavior
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910638461.4A
Other languages
Chinese (zh)
Inventor
Zhang Yingxian
Zhang Jing
Zeng Weijun
Wang Hangxian
Wang Yonggang
Wang Heng
Cheng Qingming
Zhu Lingli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Keyixing Information Technology Co ltd
Original Assignee
Nanjing Fiat Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Fiat Intelligent Technology Co Ltd
Priority to CN201910638461.4A
Publication of CN110598521A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a behavior and physiological state identification method based on intelligent analysis of a face image, comprising the following steps: (1) establishing a face detection and tracking model; (2) acquiring a face image of the current human target and preprocessing it; (3) accurately locating the eye, mouth and ear regions in the face image; (4) processing the face image with face segmentation, skin color detection and image edge extraction; (5) determining whether the current human target is drowsy and whether smoking or phone-call behavior is present. The invention is an efficient method for detecting the physiological state and behaviors of the current human target on low-power embedded devices: the detected physiological state includes fatigue, and the detected behaviors include smoking and making a phone call. The invention can be applied to industrial fields such as intelligent driver assistance, behavior analysis of personnel in military/police simulation training, and intelligent wearable equipment for military/police individuals.

Description

Behavior and physiological state identification method based on intelligent analysis of face image
Technical Field
The invention relates to the fields of image recognition and deep learning, and in particular to a behavior and physiological state recognition method based on intelligent analysis of a face image.
Background
Research shows that a driver's behavior and physiological state are strongly correlated with his or her attention: making a phone call while driving, for example, seriously distracts the driver and raises the probability of an accident to at least four times the normal level. Recognizing the driver's behavior and physiological state is therefore important for safe driving, addresses a key concern of vehicle monitoring departments and users, and has very important application value.
A face detector based on the HOG (Histogram of Oriented Gradients) achieves good processing speed on processors of the x86 architecture, but it cannot meet commercial requirements on platforms with limited memory and computing resources, and it is even harder for it to meet performance targets in scenarios where multiple physiological states must be processed.
Disclosure of Invention
Purpose of the invention: the invention aims to provide an efficient method for detecting multiple human physiological states and behaviors on low-power embedded devices, where the detected physiological states include fatigue and the detected behaviors include smoking and making a phone call.
To achieve this purpose, the invention adopts the following technical scheme: a behavior and physiological state recognition method based on intelligent analysis of face images, comprising the following steps:
(1) establishing a face detection and tracking model;
(2) acquiring a face image of a current human target, and preprocessing the face image;
(3) accurately locating the eye, mouth and ear regions in the face image;
(4) processing the face image with face segmentation, skin color detection and image edge extraction;
(5) determining whether the current human target is drowsy and whether smoking or phone-call behavior is present.
In step (1), the face detection and tracking model is established using the DLIB library.
In step (2), the face image of the current human target is acquired by a camera, and the preprocessing includes reducing background noise and removing the influence of illumination on the face image, thereby optimizing image quality.
In step (3), the eye, mouth and ear regions are accurately located in the face image as follows:
(31) according to the face detection and tracking model obtained in step (1), 68 facial feature points are marked in the face image, and the positions of the mouth, eye and nose regions are then obtained from these feature points, where the nose region position (Px, Py) gives the face center coordinates;
(32) the ear regions are located according to the face proportions, computed from the following quantities: (Px, Py), the face center coordinates of the current human target; Fw and Fh, the width and height of the current human target's face region; (ERx, ERy), the reference coordinates of the right ear region, with ERh and ERw its height and width; and (ELx, ELy), the reference coordinates of the left ear region, with ELh and ELw its height and width.
In step (4), the face image is processed with face segmentation, skin color detection and image edge extraction, as follows:
(41) blurring the face image with an image filtering method;
(42) enhancing the face image with an image enhancement technique;
(43) obtaining a local grayscale image of the mouth using the mouth region position from step (3), and removing connected regions of small area;
(44) applying a Canny edge detection operator to the image edges obtained in step (43), performing noise suppression and computing gradient magnitude and direction, to obtain the edge lines of the face image;
(45) segmenting skin color regions with a Gaussian skin color model in the YCbCr color space, and calculating the ratio of the area occupied by skin color pixels in the ear region to the area of the whole ear region, specifically
β = m / ε,
where β is the ratio of the area occupied by skin color pixels in the ear region to the area of the whole ear region, εi denotes the position of the i-th skin color pixel in the ear region (i = 1, …, m), m is the number of skin color pixel positions in the ear region, and ε denotes the total area of the ear region.
In step (5), whether the current human target is drowsy and whether smoking and phone-call behaviors exist is judged as follows:
(51) from the mouth and eye region positions obtained in step (3): if, within the specified detection time, the height-to-width ratio of the eye region stays below its specified threshold while the height-to-width ratio of the mouth region stays above its specified threshold, the current human target is judged to be in a fatigue state;
(52) whether the current human target exhibits smoking behavior is judged from the edge lines of the face image obtained in step (44), using the following rule:
if L(i) = 1 the current target has smoking behavior, and if L(i) = 0 it does not, where L(i) is the indicator obtained by comparing the inclination angle θ(i) of an edge line against the specified edge-line inclination-angle threshold θ0;
(53) from the ratio, obtained in step (45), of the area occupied by skin color pixels in the ear region to the area of the whole ear region: if this ratio exceeds the specified skin color threshold for the ear region, the current human target is judged to have phone-call behavior.
To reduce the probability of misjudgment in step (5), in step (2) the camera continuously acquires multiple frames of dynamic face images of the current human target, and the decision is made from the average of the processing results over those frames.
Beneficial effects: the invention provides an efficient method for detecting the physiological state and behaviors of the current human target on low-power embedded devices; the detected physiological state includes fatigue, and the detected behaviors include smoking and making a phone call. The invention can be applied to industrial fields such as intelligent driver assistance, behavior analysis of personnel in military/police simulation training, and intelligent wearable equipment for military/police individuals.
Drawings
FIG. 1 is a flow chart of a behavior and physiological state recognition method based on intelligent analysis of human face images according to the present invention;
FIG. 2 is a flow chart for determining whether a current human target is drowsy.
Detailed Description
the invention is further explained below with reference to the drawings.
As shown in FIG. 1, the behavior and physiological state recognition method based on intelligent analysis of face images of the present invention comprises the following steps:
(1) establishing a face detection and tracking model using the DLIB library;
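For concreteness, the following Python sketch shows one way step (1) could be set up with the DLIB library; the stock HOG detector, the correlation tracker, and the public 68-point landmark file name are assumptions, since the patent names only the library itself.

```python
# Minimal sketch of step (1), assuming dlib's stock frontal face detector,
# correlation tracker, and the publicly distributed 68-point landmark model.
import dlib

detector = dlib.get_frontal_face_detector()                    # face detection
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
tracker = dlib.correlation_tracker()                           # face tracking

def detect_or_track(gray, tracking):
    """Detect a face; if detection fails but a track is live, follow the track."""
    faces = detector(gray, 1)                 # upsample once for small faces
    if len(faces) > 0:
        tracker.start_track(gray, faces[0])   # (re)initialise the tracker
        return faces[0], True
    if tracking:                              # only valid after start_track
        tracker.update(gray)
        p = tracker.get_position()
        return dlib.rectangle(int(p.left()), int(p.top()),
                              int(p.right()), int(p.bottom())), True
    return None, False
```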
(2) acquiring a face image of the current human target with a camera and preprocessing it, where the preprocessing includes reducing background noise and removing the influence of illumination, thereby optimizing image quality;
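A minimal sketch of the preprocessing in step (2), assuming Gaussian filtering for background noise and CLAHE for illumination correction; the patent does not name the concrete algorithms, so both choices are stand-ins.

```python
# Sketch of step (2): denoise, then flatten illumination. Gaussian blur and
# CLAHE are assumed stand-ins; the patent does not specify the algorithms.
import cv2

def preprocess(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress background noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)                          # reduce illumination effects
```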
(3) accurately locating the eye, mouth and ear regions in the face image, as follows:
(31) according to the face detection and tracking model from step (1), 68 facial feature points are marked in the face image, and the positions of the mouth, eye and nose regions are then obtained from these feature points, where the nose region position (Px, Py) gives the face center coordinates;
(32) obtaining the position of the nose region according to the face detection and tracking model from step (1);
(33) locating the ear regions according to the face proportions (see the sketch after this step), computed from the following quantities: (Px, Py), the face center coordinates of the current human target; Fw and Fh, the width and height of the current human target's face region; (ERx, ERy), the reference coordinates of the right ear region, with ERh and ERw its height and width; and (ELx, ELy), the reference coordinates of the left ear region, with ELh and ELw its height and width;
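The patent expresses step (33) as a formula that is reproduced in the original only as an image, so the proportional constants in the sketch below are illustrative assumptions; the sketch merely shows the shape of the computation, deriving both ear boxes from the face center (Px, Py) and the face size Fw × Fh.

```python
# Sketch of step (33). The 0.20, 0.35 and 0.50 proportionality constants are
# assumed values, not taken from the patent's (unreproduced) formula image.
def ear_regions(px, py, fw, fh):
    """Derive right/left ear boxes (x, y, w, h) from the face box."""
    ew, eh = 0.20 * fw, 0.35 * fh        # assumed ear size relative to face size
    ery = ely = py - 0.5 * eh            # vertically centred on the nose line
    erx = px + 0.50 * fw                 # right ear: just past the face border
    elx = px - 0.50 * fw - ew            # left ear: mirrored on the other side
    return (erx, ery, ew, eh), (elx, ely, ew, eh)
```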
(4) processing the face image with face segmentation, skin color detection and image edge extraction, as follows:
(41) blurring the face image with an image filtering method;
(42) enhancing the face image with an image enhancement technique;
(43) removing connected regions in the mouth area using the mouth region position from step (3);
(44) obtaining the edge lines of the face image with the image Hough transform and a Canny edge detection operator;
(45) segmenting skin color regions with a Gaussian skin color model in the YCbCr color space, and calculating the ratio of the area occupied by skin color pixels in the ear region to the area of the whole ear region, specifically
β = m / ε,
where β is the ratio of the area occupied by skin color pixels in the ear region to the area of the whole ear region, εi denotes the position of the i-th skin color pixel in the ear region (i = 1, …, m), m is the number of skin color pixel positions in the ear region, and ε denotes the total area of the ear region;
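A sketch of the two measurements produced by steps (44) and (45): the Canny/Hough parameters and the rectangular Cb/Cr bounds are common literature values rather than patent values, and the rectangular threshold mask stands in for the Gaussian YCbCr skin color model the patent actually uses.

```python
# Sketch of steps (44)-(45). All parameter values are assumptions; the
# rectangular Cb/Cr mask approximates the patent's Gaussian skin-color model.
import cv2
import numpy as np

def edge_line_angles(mouth_gray):
    """Edge lines of the mouth area and their inclination angles in degrees."""
    edges = cv2.Canny(mouth_gray, 50, 150)                       # step (44)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return []
    return [abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            for x1, y1, x2, y2 in lines[:, 0]]

def skin_ratio(ear_bgr):
    """beta = m / epsilon: skin pixels over total pixels of the ear box."""
    ycrcb = cv2.cvtColor(ear_bgr, cv2.COLOR_BGR2YCrCb)           # step (45)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))     # Cr, Cb bounds
    return cv2.countNonZero(mask) / float(ear_bgr.shape[0] * ear_bgr.shape[1])
```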
(5) judging whether the current human target is drowsy and whether smoking and phone-call behaviors exist, as follows:
(51) from the mouth and eye region positions obtained in step (3), as shown in FIG. 2: if, within the specified detection time, the height-to-width ratio of the eye region (closure rate) stays below its specified threshold while the height-to-width ratio of the mouth region (opening rate) stays above its specified threshold, the current human target is judged to be in a fatigue state;
(52) whether the current human target exhibits smoking behavior is judged from the edge lines of the face image obtained in step (44), using the following rule:
if L(i) = 1 the current target has smoking behavior, and if L(i) = 0 it does not, where L(i) is the indicator obtained by comparing the inclination angle θ(i) of an edge line against the specified edge-line inclination-angle threshold θ0;
(53) from the ratio, obtained in step (45), of the area occupied by skin color pixels in the ear region to the area of the whole ear region: if this ratio exceeds the specified skin color threshold for the ear region, the current human target is judged to have phone-call behavior.
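Combining the three decisions of step (5) in one sketch: every threshold value, and the direction of the edge-angle comparison in step (52), is an assumption here, since the patent specifies only that fixed thresholds exist, not their values.

```python
# Sketch of steps (51)-(53); all threshold values and the direction of the
# edge-angle comparison in (52) are assumptions rather than patent values.
EYE_RATIO_T = 0.20    # eye height/width below this -> eyes closing
MOUTH_RATIO_T = 0.60  # mouth height/width above this -> yawning
ANGLE_T = 30.0        # edge-line inclination threshold theta_0, in degrees
SKIN_T = 0.60         # threshold on the ear-region skin ratio beta

def height_width_ratio(pts):
    """Height/width ratio of a region given as an Nx2 numpy array of landmarks."""
    width = pts[:, 0].max() - pts[:, 0].min()
    height = pts[:, 1].max() - pts[:, 1].min()
    return height / width

def decide(eye_pts, mouth_pts, edge_angles, beta):
    fatigue = (height_width_ratio(eye_pts) < EYE_RATIO_T and
               height_width_ratio(mouth_pts) > MOUTH_RATIO_T)   # step (51)
    smoking = any(a < ANGLE_T for a in edge_angles)             # step (52), L(i)=1
    calling = beta > SKIN_T                                     # step (53)
    return fatigue, smoking, calling
```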
To reduce the probability of misjudgment in step (5), in step (2) the camera continuously acquires multiple frames of dynamic face images of the current human target, and the decision is made from the average of the processing results over those frames.
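The multi-frame decision can be realized as a moving average of per-frame results, as sketched below; the window length and vote fraction are assumed values. One instance would be kept per behavior (fatigue, smoking, calling), so a single noisy frame cannot flip the overall judgment.

```python
# Sketch of the multi-frame averaging; window size and vote level are assumed.
from collections import deque

class RollingDecision:
    """Average per-frame boolean results over a sliding window of frames."""
    def __init__(self, window=15, vote=0.5):
        self.results = deque(maxlen=window)
        self.vote = vote

    def update(self, frame_result):
        self.results.append(1.0 if frame_result else 0.0)
        return (sum(self.results) / len(self.results)) > self.vote
```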
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (7)

1. A behavior and physiological state recognition method based on intelligent analysis of face images is characterized by comprising the following steps:
(1) establishing a face detection and tracking model;
(2) acquiring a face image of a current human target, and preprocessing the face image;
(3) accurately locating the eye, mouth and ear regions in the face image;
(4) processing the face image with face segmentation, skin color detection and image edge extraction;
(5) determining whether the current human target is drowsy and whether smoking or phone-call behavior is present.
2. The behavior and physiological state recognition method based on intelligent analysis of face images as claimed in claim 1, wherein in step (1) the face detection and tracking model is established using the DLIB library.
3. The behavior and physiological state recognition method based on intelligent analysis of face images as claimed in claim 1, wherein in step (2) the face image of the current human target is acquired by a camera, and the preprocessing includes reducing background noise and removing the influence of illumination on the face image, thereby optimizing image quality.
4. The behavior and physiological state recognition method based on intelligent analysis of face images as claimed in claim 1, wherein in step (3) the eye, mouth and ear regions are accurately located in the face image as follows:
(31) according to the face detection and tracking model obtained in step (1), 68 facial feature points are marked in the face image, and the positions of the mouth, eye and nose regions are then obtained from these feature points, where the nose region position (Px, Py) gives the face center coordinates;
(32) the ear regions are located according to the face proportions, computed from the following quantities: (Px, Py), the face center coordinates of the current human target; Fw and Fh, the width and height of the current human target's face region; (ERx, ERy), the reference coordinates of the right ear region, with ERh and ERw its height and width; and (ELx, ELy), the reference coordinates of the left ear region, with ELh and ELw its height and width.
5. The behavior and physiological state recognition method based on intelligent analysis of face images as claimed in claim 1, wherein in step (4) the face image is processed with face segmentation, skin color detection and image edge extraction, as follows:
(41) blurring the face image with an image filtering method;
(42) enhancing the face image with an image enhancement technique;
(43) obtaining a local grayscale image of the mouth using the mouth region position from step (3), and removing connected regions of small area;
(44) applying a Canny edge detection operator to the image edges obtained in step (43), performing noise suppression and computing gradient magnitude and direction, to obtain the edge lines of the face image;
(45) segmenting skin color regions with a Gaussian skin color model in the YCbCr color space, and calculating the ratio of the area occupied by skin color pixels in the ear region to the area of the whole ear region, specifically
β = m / ε,
where β is the ratio of the area occupied by skin color pixels in the ear region to the area of the whole ear region, εi denotes the position of the i-th skin color pixel in the ear region (i = 1, …, m), m is the number of skin color pixel positions in the ear region, and ε denotes the total area of the ear region.
6. The behavior and physiological state recognition method based on intelligent analysis of face images as claimed in claim 5, wherein in step (5) whether the current human target is drowsy and whether smoking and phone-call behaviors exist is judged as follows:
(51) from the mouth and eye region positions obtained in step (3): if, within the specified detection time, the height-to-width ratio of the eye region stays below its specified threshold while the height-to-width ratio of the mouth region stays above its specified threshold, the current human target is judged to be in a fatigue state;
(52) whether the current human target exhibits smoking behavior is judged from the edge lines of the face image obtained in step (44), using the following rule:
if L(i) = 1 the current target has smoking behavior, and if L(i) = 0 it does not, where L(i) is the indicator obtained by comparing the inclination angle θ(i) of an edge line against the specified edge-line inclination-angle threshold θ0;
(53) from the ratio, obtained in step (45), of the area occupied by skin color pixels in the ear region to the area of the whole ear region: if this ratio exceeds the specified skin color threshold for the ear region, the current human target is judged to have phone-call behavior.
7. The behavior and physiological state recognition method based on intelligent analysis of face images as claimed in claim 1, wherein, to reduce the probability of misjudgment in step (5), in step (2) the camera continuously acquires multiple frames of dynamic face images of the current human target, and the decision is made from the average of the processing results over those frames.
CN201910638461.4A 2019-07-16 2019-07-16 Behavior and physiological state identification method based on intelligent analysis of face image Pending CN110598521A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910638461.4A CN110598521A (en) 2019-07-16 2019-07-16 Behavior and physiological state identification method based on intelligent analysis of face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910638461.4A CN110598521A (en) 2019-07-16 2019-07-16 Behavior and physiological state identification method based on intelligent analysis of face image

Publications (1)

Publication Number Publication Date
CN110598521A 2019-12-20

Family

ID=68852857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910638461.4A Pending CN110598521A (en) 2019-07-16 2019-07-16 Behavior and physiological state identification method based on intelligent analysis of face image

Country Status (1)

Country Link
CN (1) CN110598521A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400687A (en) * 2020-03-09 2020-07-10 京东数字科技控股有限公司 Authentication method and device and robot
CN112071006A (en) * 2020-09-11 2020-12-11 湖北德强电子科技有限公司 High-efficiency low-resolution image area intrusion recognition algorithm and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN103279750A (en) * 2013-06-14 2013-09-04 清华大学 Detecting method of mobile telephone holding behavior of driver based on skin color range
CN108509902A (en) * 2018-03-30 2018-09-07 湖北文理学院 A kind of hand-held telephone relation behavioral value method during driver drives vehicle
CN109155106A (en) * 2016-06-02 2019-01-04 欧姆龙株式会社 Condition estimating device, condition estimation method and condition estimating program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN103279750A (en) * 2013-06-14 2013-09-04 清华大学 Detecting method of mobile telephone holding behavior of driver based on skin color range
CN109155106A (en) * 2016-06-02 2019-01-04 欧姆龙株式会社 Condition estimating device, condition estimation method and condition estimating program
CN108509902A (en) * 2018-03-30 2018-09-07 湖北文理学院 A kind of hand-held telephone relation behavioral value method during driver drives vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wei Minguo: "Detection method for drivers' hand-held phone use behavior based on machine vision", China Master's Theses Full-text Database, Engineering Science and Technology II, pages 2-51 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400687A (en) * 2020-03-09 2020-07-10 京东数字科技控股有限公司 Authentication method and device and robot
CN111400687B (en) * 2020-03-09 2024-02-09 京东科技控股股份有限公司 Authentication method, authentication device and robot
CN112071006A (en) * 2020-09-11 2020-12-11 湖北德强电子科技有限公司 High-efficiency low-resolution image area intrusion recognition algorithm and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Yingxian

Inventor after: Zhang Jing

Inventor after: Wang Hangxian

Inventor after: Zhu Lingli

Inventor before: Zhang Yingxian

Inventor before: Zhang Jing

Inventor before: Zeng Weijun

Inventor before: Wang Hangxian

Inventor before: Wang Yonggang

Inventor before: Wang Heng

Inventor before: Cheng Qingming

Inventor before: Zhu Lingli

TA01 Transfer of patent application right

Effective date of registration: 20240402

Address after: 210000, Building 01, Building A, 4th Floor, Block B, Nanjing International Service Outsourcing Building, No. 301 Hanzhongmen Street, Gulou District, Nanjing City, Jiangsu Province

Applicant after: Nanjing Keyixing Information Technology Co.,Ltd.

Country or region after: China

Address before: Room 504, Building A, Dongshan International Enterprise Headquarters Park, No. 33 Dongqi Road, Jiangning District, Nanjing City, Jiangsu Province, 210000

Applicant before: NANJING FEIAITE INTELLIGENT TECHNOLOGY Co.,Ltd.

Country or region before: China
