CONSTRUCTION OF A SOMATOSENSORY INTERACTIVE SYSTEM BASED ON COMPUTER VISION AND AUGMENTED REALITY TECHNIQUES USING THE KINECT DEVICE FOR FOOD AND AGRICULTURAL EDUCATION
A somatosensory interactive system based on computer vision and augmented reality (AR) techniques using the Kinect device is proposed, on which a game of harvesting three kinds of fruit can be played for food and agricultural education. The Kinect captures users’ motion images, Unity3D serves as the game engine, and the Kinect SDK is used for program development, implementing the face detection and tracking, hand-gesture recognition, and body-model matching and tracking involved in the fruit-harvesting activities. AR-based photos of the harvest results can be taken and downloaded as souvenirs. The system was exhibited, observations of the users’ performances were made, and interviews with experts and users were conducted. The collected opinions were used to evaluate the effectiveness of the system, leading to the following conclusions: 1) the interactive experience of using the system is simple and intuitive; 2) the use of body movements for man-machine interaction received positive reviews; 3) introducing somatosensory interaction into education arouses participants’ interest, achieving the effect of edutainment; and 4) the experience of taking commemorative photos can achieve the publicity and promotion effects of food and agricultural education through sharing on social media.
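The harvesting interaction described above ultimately reduces to comparing tracked skeleton-joint positions against the positions of virtual fruit objects. The following is a minimal illustrative sketch of that idea only; the function names, coordinate conventions, and the distance threshold are assumptions for illustration, not the authors' Unity3D/Kinect SDK implementation.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3-D points given as (x, y, z) tuples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def is_harvest_gesture(hand_joint, fruit_position, threshold=0.15):
    """Return True when the tracked hand joint is close enough to a
    virtual fruit to count as a 'pick'.  The 0.15 m threshold is an
    assumed value, not one taken from the paper."""
    return distance(hand_joint, fruit_position) < threshold

# Example: a hand 10 cm from a fruit triggers the harvest event.
print(is_harvest_gesture((0.5, 1.2, 1.0), (0.5, 1.3, 1.0)))  # True
```

In practice the hand-joint coordinates would come from the Kinect SDK's skeleton stream each frame, and a successful pick would trigger the corresponding game event in Unity3D.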
Copyright (c) 2021 Chao-Ming Wang, Yu-Hui Lin
This work is licensed under a Creative Commons Attribution 4.0 International License.