Jui-Fa Chen1, Wei-Chuan Lin2, Kun-Hsiao Tsai1 and Shih-Yao Dai3

1Department of Information Engineering, Tamkang University, Tamsui, Taiwan 251, R.O.C.
2Department of Information Technology, Takming College, Taipei, Taiwan 114, R.O.C.
3Information Security Technology Center, Networks & Multimedia Institute, Institute for Information Industry, Taipei, Taiwan 106, R.O.C.


 

Received: December 13, 2010
Accepted: January 6, 2011
Publication Date: September 1, 2011

DOI: https://doi.org/10.6180/jase.2011.14.3.09


ABSTRACT


In the domain of human movement, athletes, dancers and rehabilitation patients have traditionally relied on experts to judge whether a learner's movements are correct and to offer suggestions for improvement. With modern sensing technology, the motion of each body region can be measured and analyzed automatically. Our research applies Laban Movement Analysis (LMA) to dance, analyzing data collected from several sensors to assess the effort qualities of a learner's movements. In this paper, we use the LMA sudden and sustained efforts to analyze these movements and construct a guiding-language system based on guiding language already provided by experts. The system offers suggestions to learners even when no expert is available, allowing them to learn in a self-taught manner.
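As a rough illustration of the kind of analysis described above, the following Python sketch classifies a sampled movement segment as a sudden or sustained effort and maps the result to a canned guiding phrase. It is a minimal sketch under assumed inputs (acceleration magnitudes at a fixed sampling interval); the function names, threshold, and jerk-based heuristic are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch (not the authors' implementation): classify a movement
# segment as a "sudden" or "sustained" effort from sampled acceleration
# magnitudes, then map the result to a canned guiding phrase. Names,
# thresholds, and the jerk-based heuristic are illustrative assumptions.

from typing import List

def classify_time_effort(accel: List[float], dt: float,
                         jerk_threshold: float = 5.0) -> str:
    """Label a segment 'sudden' if its peak jerk (rate of change of
    acceleration) exceeds a threshold, otherwise 'sustained'."""
    jerks = [abs(a2 - a1) / dt for a1, a2 in zip(accel, accel[1:])]
    return "sudden" if max(jerks, default=0.0) > jerk_threshold else "sustained"

def guiding_phrase(observed: str, expected: str) -> str:
    """Return an expert-style suggestion when the learner's effort quality
    differs from the reference movement."""
    if observed == expected:
        return "Good: the timing of this movement matches the reference."
    if expected == "sudden":
        return "Try to make this movement quicker and more abrupt."
    return "Try to make this movement smoother and more even in time."

# Example: a learner's arm segment sampled at 50 Hz (dt = 0.02 s).
learner_accel = [0.1, 0.2, 0.3, 1.8, 0.4, 0.2]   # m/s^2, contains a spike
observed = classify_time_effort(learner_accel, dt=0.02)
print(observed, "->", guiding_phrase(observed, expected="sustained"))
```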


Keywords: Human Movement, Effort, Laban Movement Analysis (LMA), E-Learning, Sensor

