Video analysis boosts healthcare efficiency and safety

Video-pattern-recognition programs enable automated patient monitoring for diagnostic and safety purposes, thus reducing staffing requirements in hospitals and nursing homes.
10 February 2011
Pau-Choo Chung, Yung-Ming Kuo, Chin-De Liu and Chun-Rong Huang

It is an inescapable truth: as the median age of the citizens in many countries continues to rise, so will the number of patients in hospitals and long-term-care facilities. If the population of patients grows faster than that of caregivers, automation of some of the caregivers' duties will become increasingly important. Therefore, in the Smart Media and Intelligent Living Excellence (SMILE) Laboratory we have focused on applying automated video and physiological-signal analysis for healthcare applications. Specifically, we have developed systems for respiration and human-behavior analysis with dangerous-event detection.

Respiration behavior can be critical in diagnosing patient illness or recognizing distress during sleep. Many diseases—such as obstructive sleep-apnea syndrome, cardiovascular disease, and stroke—induce abnormal respiration. Automated respiration monitoring is conventionally performed with demand regulators or pneumatic chest bands, which can be uncomfortable and change the respiration behavior they are meant to monitor. Instead, we use images captured by a near-IR camera (see Figure 1) to measure the sleeper's respiration1 based on the periodic rising and falling motion of the chest or abdomen.2


Figure 1. Setting of the near-IR camera for respiration detection. (© 2010 IEEE.)
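To give a flavor of the basic idea, the following sketch derives a raw motion signal from near-IR video by frame differencing inside a fixed chest region of interest (ROI). It is a simplification of the system described below, which locates the region dynamically and uses optical flow; the video path and ROI coordinates are hypothetical placeholders.

```python
import cv2
import numpy as np

# Minimal sketch: a raw respiration signal obtained by averaging
# inter-frame intensity changes inside a chest/abdomen ROI.
cap = cv2.VideoCapture("nearir_sleep.avi")  # hypothetical recording
x, y, w, h = 200, 150, 160, 120             # assumed chest ROI

ok, prev = cap.read()
if not ok:
    raise IOError("could not read video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray[y:y+h, x:x+w], prev[y:y+h, x:x+w])
    signal.append(float(diff.mean()))  # one sample per video frame
    prev = gray

# When only respiration moves the body, this signal is quasi-periodic
# at the breathing frequency.
signal = np.asarray(signal)
```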

Several challenges must be addressed for the system to work well. Respiration-induced movements are tiny, whereas much larger movements occur as subjects shift their bodies, consciously or unconsciously, during sleep. The system must therefore dynamically relocate the respiration region and determine the representative respiratory-motion direction for analysis, and it must then measure the respiration itself in real time. Figure 2 shows the procedure of the measurement system, which comprises three subsystems.


Figure 2. Procedure for respiration detection from near-IR images. Seq.: Sequence. RR: Respiration region. (© 2010 IEEE.)

First, the body-motion-context detection subsystem distinguishes between respiratory and nonrespiratory body movement. To optimally segment the foreground and detect body motion, we propose a context-based background-subtraction method with an adaptive-time-constant scheme based on Gaussian-mixture models. This scheme can accurately recognize the status of the sleeping subject's body—moving or motionless—and quickly detect transitions from one status to the other. To dynamically determine the working parameters from the extracted motion information, we developed a finite-state-controlled hidden-Markov model (HMM).
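A rough sketch of the adaptive-time-constant idea is shown below, using OpenCV's stock Gaussian-mixture background subtractor as a stand-in for our context-based scheme. The two learning rates, the motion threshold, and the video path are illustrative assumptions, and the simple two-state switch only approximates what the finite-state-controlled HMM does.

```python
import cv2

# Approximate the adaptive time constant by switching the background
# subtractor's learning rate with the detected motion state: update the
# background slowly while the body moves, quickly once it is motionless.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
cap = cv2.VideoCapture("nearir_sleep.avi")  # hypothetical recording

MOVING_RATE, STILL_RATE = 0.01, 0.2  # illustrative time constants
rate = STILL_RATE
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame, learningRate=rate)
    motion_fraction = (fg > 0).mean()
    # Two-state stand-in for the finite-state-controlled HMM: a large
    # foreground area indicates a moving body.
    rate = MOVING_RATE if motion_fraction > 0.05 else STILL_RATE
```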

Second, the respiration-context detection subsystem dynamically detects the representative direction of the sleeper's respiration motion and locates the respiratory region in the image. This subsystem searches for a pair of representative video frames, i.e., the two frames with the maximum motion difference due to the sleeper's respiration, derived using optical-flow estimation. The mean-shift algorithm is then used to locate the respiratory region.
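The sketch below shows how such a step could look with standard tools: dense optical flow between two frames yields a motion-magnitude map, and mean shift then converges on the region of strongest motion. The frame selection is simplified to a single fixed pair, and the file names, initial window, and stopping criteria are assumptions.

```python
import cv2
import numpy as np

# Dense optical flow between two frames (hypothetical files) gives a
# per-pixel motion-magnitude map.
frame_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)

# Mean shift over the magnitude map homes in on the region of strongest
# (respiratory) motion; window size and criteria are illustrative.
weights = cv2.normalize(magnitude, None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
window = (0, 0, 160, 120)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
_, respiratory_region = cv2.meanShift(weights, window, criteria)
print("respiratory region (x, y, w, h):", respiratory_region)
```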

Third, the motion-vector-based real-time respiration-measurement subsystem speeds up the system by detecting feature zones in the respiratory region. It then uses an improved fast-optical-flow method to obtain the respiratory curve, from which the respiration frequency, depth, and interval between successive breaths are analyzed. Together, the three subsystems can identify abnormal respiration patterns and automatically alert caregivers for diagnosis or intervention.
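Once a respiratory curve is available, frequency, depth, and breath-to-breath interval can be read off with simple peak analysis, as in the sketch below. A synthetic curve stands in for the motion-vector output, and the frame rate and peak-spacing constraint are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

# Analyze a respiratory curve sampled at the video frame rate. Here the
# curve is a synthetic 15-breaths-per-minute signal standing in for the
# output of the motion-vector subsystem.
fps = 30.0
t = np.arange(0, 60, 1 / fps)
curve = np.sin(2 * np.pi * 0.25 * t)

peaks, _ = find_peaks(curve, distance=fps)  # at most one peak per second
intervals = np.diff(peaks) / fps            # seconds between breaths
rate_bpm = 60.0 / intervals.mean()          # breathing frequency
depth = curve[peaks].mean() - curve.min()   # crude depth (amplitude) measure

print(f"rate: {rate_bpm:.1f} breaths/min, "
      f"mean interval: {intervals.mean():.2f} s, depth: {depth:.2f}")
# A caregiver alert could fire when the rate or intervals drift outside
# clinically normal bounds (e.g., a prolonged gap between peaks).
```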

Our second line of research into automated healthcare monitoring involves detecting abnormal, potentially dangerous behavior through real-time video patient monitoring, again using HMM techniques. Because an abnormality is a pattern that deviates from normal behavior and the daily routine, a system with scenario-based understanding of human behaviors3,4 is needed.

Human behavior can be characterized by three components: the surrounding environment, human activities, and temporal information. Recognizing that similar activities may represent different behaviors in different contexts, we developed a behavior-understanding system, the hierarchical-context HMM (HC-HMM), which recognizes and evaluates behaviors according to the activities occurring in their surrounding contexts. It applies a negation-selection mechanism that, based on posture transitions and temporal-duration reasoning, extracts basic activities from video sequences captured by surveillance cameras installed in a healthcare center to monitor patient safety.
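At the core of any such HMM-based system is decoding the most likely hidden behavior sequence from observed postures. The toy example below runs the Viterbi algorithm over an invented three-state model; the states, postures, and probabilities are illustrative only, not the HC-HMM parameters.

```python
import numpy as np

# Toy Viterbi decoding: infer behavior states from a posture sequence.
states = ["resting", "walking", "falling"]
postures = {"lying": 0, "standing": 1, "bent": 2}

pi = np.array([0.6, 0.3, 0.1])            # initial state probabilities
A = np.array([[0.8, 0.15, 0.05],          # state-transition probabilities
              [0.2, 0.7, 0.1],
              [0.3, 0.2, 0.5]])
B = np.array([[0.7, 0.2, 0.1],            # emission: P(posture | state)
              [0.05, 0.8, 0.15],
              [0.5, 0.1, 0.4]])

obs = [postures[p] for p in ["standing", "standing", "bent", "lying"]]

# Viterbi in log space to avoid underflow on long sequences.
logA, logB = np.log(A), np.log(B)
delta = np.log(pi) + logB[:, obs[0]]
back = []
for o in obs[1:]:
    scores = delta[:, None] + logA        # delta[i] + log A[i, j]
    back.append(scores.argmax(axis=0))    # best predecessor per state
    delta = scores.max(axis=0) + logB[:, o]

path = [int(delta.argmax())]
for ptr in reversed(back):
    path.append(int(ptr[path[-1]]))
path.reverse()
print([states[s] for s in path])          # decoded behavior sequence
```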

In daily life, multiple concurrent, independent actions and interactions are common. To address this, we also developed an interaction-embedded HMM (IE-HMM) framework capable of recognizing both individual actions and multiple independent group interactions within the same scene.5 The IE-HMM framework comprises switch-control, individual-duration-HMM, and interaction-coupled-duration-HMM modules. In tests, the system worked with video sequences captured in a healthcare center over one month, between 8am and 6pm each day. We focused on recognizing wheelchairs and the normal and abnormal behaviors of their users, reasoning that wheelchair detection gives healthcare systems an automatic way to monitor the people in their care.

To meet this need, we developed a cascaded decision tree combined with invariant local descriptors that represent the shape and appearance variations of wheelchairs, enabling us to detect them in cluttered environments6,7 and identify the direction of their motion. Each node of the tree contains a boosted-cascade classifier that detects regions containing wheelchairs and rejects unrelated image regions (see Figure 3). Figure 4 shows detection results, with detected wheelchairs marked by red rectangles (a code sketch follows the figures).


Figure 3. Cascade decision-tree structure. (© 2010 IEEE.)

Figure 4. Wheelchair-detection results. (© 2010 IEEE.)
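As a rough sketch of how one node's boosted cascade could be run in practice, the snippet below uses OpenCV's cascade-classifier interface. The model file and image are hypothetical: OpenCV ships no wheelchair cascade, and our full detector is a tree of such cascades over invariant local descriptors rather than a single classifier.

```python
import cv2

# Run a single boosted-cascade detector over a surveillance frame.
# "wheelchair_cascade.xml" is a hypothetical trained model file.
detector = cv2.CascadeClassifier("wheelchair_cascade.xml")
frame = cv2.imread("ward_corridor.png")  # hypothetical surveillance frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, w, h) box; regions rejected by any early
# cascade stage are discarded immediately, which keeps detection fast.
boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red box
cv2.imwrite("detections.png", frame)
```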

In the future, we hope to further develop such systems to provide automatic responses to object or event detection. For example, application-specific services could be automatically brought to disabled people when their wheelchairs are detected. Detecting abnormal or dangerous behaviors, such as wheelchair users attempting to leave their chairs to get into bed without assistance—a strictly prohibited behavior—could avert possible patient falls and help improve the quality of care.


Pau-Choo Chung
National Cheng Kung University (NCKU)
Tainan, Taiwan

Pau-Choo (Julia) Chung received BS and MS degrees in electrical engineering from NCKU in 1981 and 1983, respectively, and a PhD degree in electrical engineering from Texas Tech University, Lubbock, in 1991. She is currently a Distinguished Professor at NCKU and an IEEE Fellow.

Yung-Ming Kuo
Himax Imaging Inc.
Jhubei City, Taiwan

Yung-Ming Kuo received his PhD degree from NCKU's Department of Electrical Engineering in 2010. His current research interests include image processing, pattern recognition, video-based behavior analysis, and healthcare. He is now a senior engineer at Himax Imaging.

Chin-De Liu
Industrial Technology Research Institute
Hsinchu, Taiwan

Chin-De Liu received his MS and PhD degrees in electrical engineering from NCKU in 2000 and 2008, respectively. He is a researcher in the Information and Communications Research Laboratories.

Chun-Rong Huang
National Chung-Hsing University
Taichung, Taiwan

Chun-Rong Huang received BS and PhD degrees in electrical engineering from NCKU in 1999 and 2005, respectively. Since 2010, he has been an assistant professor with both the Institute of Networking and Multimedia and the Department of Computer Science and Engineering.


References:
1. Jiann-Shu Lee, Yung-Ming Kuo, Pau-Choo Chung, A visual context awareness based sleeping respiration measurement system, IEEE Trans. Inf. Technol. Biomed. 14, no. 2, pp. 255-265, 2010.
2. Jezekiel Ben-Arie, Zhiqian Wang, Purvin Pandit, Shyamsundar Rajaram, Human activity recognition using multidimensional indexing, IEEE Trans. Patt. Anal. Machine Intell. 24, no. 8, pp. 1091-1104, 2002.
3. Pau-Choo Chung, Chin-De Liu, A daily behavior enabled hidden Markov model for human behavior understanding, Patt. Recognit. 41, no. 5, pp. 1572-1580, 2008.
4. Chin-De Liu, Pau-Choo Chung, Yi-Nung Chung, Monique Thonnat, Understanding of human behaviors from videos in nursing care monitoring systems, J. High Speed Netw. 16, pp. 91-103, 2007.
5. Chin-De Liu, Yi-Nung Chung, Pau-Choo Chung, An interaction-embedded HMM framework for human behavior understanding: with nursing environments as examples, IEEE Trans. Inf. Technol. Biomed. 14, no. 5, pp. 1236-1246, 2010.
6. Chun-Rong Huang, Pau-Choo (Julia) Chung, Kuo-Wei Lin, Sheng-Chieh Tseng, Wheelchair detection using cascaded decision tree, IEEE Trans. Inf. Technol. Biomed. 14, no. 2, pp. 292-300, 2010.
7. Chin-Ann Yang, Pau-Choo Chung, Recovery of 3-D location and orientation of a wheelchair in a calibrated environment by using single perspective geometry, Proc. IEEE Region 10 Conf. (TENCON), pp. 1-4, 2007. doi:10.1109/TENCON.2007.4429011