Publications


Academic Journal Articles

Please see my Japanese homepage for papers in Japanese.

  1. A. Mansur, K. Sakata, D. Das, and Y. Kuno, “Recognition of plain objects using local region matching,” IEICE Trans. Information and Systems, Vol.E91-D, No.7, pp.1906-1913, 2008.
  2. A. Mansur and Y. Kuno, “Specific and Class Object Recognition for Service Robots through Autonomous and Interactive Methods,” IEICE Trans. Information and Systems, Vol.E91-D, No.6, pp.1793-1803, 2008.
  3. M.A. Hossain, R. Kurnia, A. Nakamura, and Y. Kuno, Interactive object recognition through hypothesis generation and confirmation, IEICE Transactions on Information and Systems, Vol.E89-D, No.7, pp.2197-2206, 2006.
  4. R. Kurnia, M.A. Hossain, A. Nakamura, and Y. Kuno, Generation of efficient and user-friendly queries for helper robots to detect target objects, Advanced Robotics, Vol.20, No.5, pp.499-517, 2006.
  5. M.A. Hossain, R. Kurnia, A. Nakamura, and Y. Kuno, Interactive object recognition system for a helper robot using photometric invariance, IEICE Transactions on Information and Systems, Vol.E88-D, No.11, pp.2500-2508, 2005.
  6. D. Miyauchi, A. Sakurai, A. Nakamura, and Y. Kuno, Bidirectional eye contact for human-robot communication, IEICE Transactions on Information and Systems, Vol.E88-D, No.11, pp.2509-2516, 2005.
  7. M.H. Jeong, Y. Kuno, N. Shimada, and Y. Shirai, Recognition of Two-Hand Gestures Using Coupled Switching Linear Model, IEICE Transactions on Information and Systems, Vol.E86-D, pp.1416-1425, 2003.
  8. Y. Kuno, N. Shimada, and Y. Shirai, "Look where you're going: A robotic wheelchair based on the integration of human and environmental observations," IEEE Robotics and Automation Magazine, Vol.10, No.1, pp.26-34, 2003.
  9. M.H. Jeong, Y. Kuno, N. Shimada, and Y. Shirai, "Recognition of Shape-Changing Hand Gestures," IEICE Transactions on Information and Systems, Vol.E85-D, pp.1678-1687, 2002.
  10. A. Iketani, A. Nagai, Y. Kuno, and Y. Shirai: Real-time surveillance system detecting persons in complex scenes, Real-Time Imaging, Vol.7, No.5, pp. 433-446, 2001.
  11. A. Nagai, Y. Kuno, and Y. Shirai : Detection of Moving Objects against a Changing Background, Systems and Computers in Japan, Vol.30, No.11, pp.107-116, 1999.
  12. Terence Chek Hion Heng, Yoshinori Kuno, and Yoshiaki Shirai: Active sensor fusion for collision avoidance in behavior-based mobile robots, IEICE Trans. Inf. & Syst. Vol.E81-D, No.5, pp.448-456, 1998.
  13. S. Maeda, Y. Kuno, and Y. Shirai : Mobile robot localization based on eigenspace analysis, Systems and Computers in Japan, Vol.28, No.12, pp.11-21, 1997.
  14. K.H. Jo, K. Hayashi, Y. Kuno, and Y. Shirai: Vision-based human interface system with world-fixed and human-centered frames using multiple view invariance, IEICE Trans. Inf. & Syst., Vol.E79-D, No.6, pp.799-808, 1996.
  15. H. Nakai, K. Fukui, and Y. Kuno : Detection of moving objects with three-level continuous modules, Systems and Computers in Japan, Vol.26, No.10, pp.97-109, 1995.
  16. I. Kweon, Y. Kuno, M. Watanabe, and K. Onoguchi: Sonar-based behaviors for a behavior-based mobile robot, IEICE Trans. Inf. & Syst., Vol.E76-D, No.4, pp.479-485, 1993.
  17. H. Kubota, Y. Okamoto, H. Mizoguchi, and Y. Kuno : Vision processor system for moving-object analysis, Machine Vision and Applications, Vol.7, No.1, pp.37-43, 1993.
  18. M. Watanabe, K. Onoguchi, Y. Kuno, and H. Asada : Obstacle detection by disparity prediction stereopsis method, Systems and Computers in Japan, Vol.22, No.10, pp.50-60, 1991.
  19. Y. Kuno, Y. Okamoto, and S. Okada : Robot vision using a feature search strategy generated from a 3-D object model, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.13, No.10, pp.1085-1097, 1991.
  20. J. Fujii, H. Sawada, T. Aizawa, K. Kato, M. Onoe, and Y. Kuno : Computerized processing of two-dimensional echocardiograms for the quantification of left ventricular regional contractility, Japanese Heart Journal, Vol.25, No.1, pp.31-43, 1984.
  21. Y. Kuno, H. Numagami, M. Ishikawa, H. Hoshino, Y. Nakamura, and M. Kidode : Robot vision implementation by high-speed image processor TOSPIX: Battery inspection, Robotica, Vol.1, pp.223-230, 1983.
  22. H. Sawada, J. Fujii, K. Kato, M. Onoe, and Y. Kuno: Three-dimensional reconstruction of the left ventricle from multiple cross sectional echocardiograms: Value for measuring left ventricular volume, British Heart Journal, Vol.50, pp.438-442, 1983.

Conference Proceedings

  1. Y. Kuno, K. Sakata, A. Mansur, and K. Yamazaki, “Robot Vision for Recognizing Complex Objects through Simple Interaction,” Mecatronics2008, 2008.
  2. A. Yamazaki, K. Yamazaki, Y. Kuno, M. Burdelski, M. Kawashima, and H. Kuzuoka, “Precision Timing in Human-Robot Interaction: Coordination of Head Movement and Utterance,” Proc. CHI2008, pp.131-140, 2008.
  3. Tajin R.T. and Kuno Y., Interactive Reference Resolution for Service Robots, Proc. FCV2008, CD-ROM, 2008.
  4. Mansur A., Sakata K., and Kuno Y., Recognition of Household Objects by Service Robots Through Interactive and Autonomous Methods, Proc. ISVC2007, pp.140-151, 2007.
  5. Sadazuka K., Kuno Y., Kawashima M. and Yamazaki K., Museum Guide Robot with Effective Head Gestures, Proc. ICCAS2007, pp.1168-1171, 2007.
  6. Yamazaki K., Kawashima M., Kuno Y., Akiya N., Burdelski M., Yamazaki A., and Kuzuoka H., Prior-to-Request and Request Behaviors within Elderly Day Care: Implications for Developing Service Robots for Use in Multiparty Settings, Proc. ECSCW2007, pp.61-78, 2007.
  7. Mansur A. and Kuno Y., Integration of Multiple Methods for Robust Object Recognition, Proc. SICE2007, pp.1990-1995, 2007.
  8. Tajin R.T. and Kuno Y., Effect of Horizontal Position on Detecting Persons Looking towards the Robot, Proc. SICE2007, pp.347-350, 2007.
  9. Kuno Y., Sadazuka K., Kawashima M., Tsuruta S., Yamazaki K., and Yamazaki A., Effective Head Gestures for Museum Guide Robot in Interaction with Humans, Proc. RO-MAN 07, pp.151-156, 2007.
  10. B. Zhan, N.D. Monekosso, P. Remagnino, T. Rukhsana, A. Mansur, and Y. Kuno, “Skin patch trajectories as scene dynamics descriptors,” Proc. MVA2007, pp. 315-318, 2007.
  11. Mansur A. and Kuno Y., Selection of Object Recognition Methods According to the Task and Object Category, Proc. MVA2007, pp. 388-392, 2007.
  12. Kuno Y., Sadazuka K., Kawashima M., Yamazaki K., Yamazaki A., and Kuzuoka H., Museum Guide Robot Based on Sociological Interaction Analysis, Proc. CHI2007, pp.1191-1194, 2007.
  13. Sadazuka K., Kawashima M., Kuno Y., and Yamazaki K., Museum Guide Robot Moving Its Head for Smooth Communication While Watching Visitors, Proc. 13th Korea-Japan Joint Workshop on Frontiers of Computer Vision, 2007.
  14. Niwa H., Akiya N., Kawashima M., Kuno Y., and Yamazaki K., Developing Helper Robot for Senior Care through Sociological Analysis of Human Interaction, Proc. 13th Korea-Japan Joint Workshop on Frontiers of Computer Vision, 2007.
  15. Mansur A., Hossain M.A., and Kuno Y., Object Recognition Based on Parallel Classifiers Using Oriented Features, Proc. ICECE 2006, CD-ROM, 2006.
  16. Mansur A., Hossain M.A., and Kuno Y., Integration of Multiple Methods for Class and Specific Object Recognition, Bebis, G. et al. Eds., Advances in Visual Computing (ISVC 2006), LNCS 4291, Springer, pp.841-849, 2006.
  17. Hisao Tsubota, Hitoshi Niwa, Yoshinori Kuno, Naonori Akiya, and Keiichi Yamazaki, Recognition of objects indicated by deictic pronouns for helper robots, SICE-ICASE International Joint Conference 2006, pp.1437-1440, Oct.18-21, Busan, Korea, 2006.
  18. Tomohiro Iwase, Rui Zhang, and Yoshinori Kuno, Robotic wheelchair moving with the caregiver, SICE-ICASE International Joint Conference 2006, pp.238-243, Oct.18-21, Busan, Korea, 2006.
  19. Y. Kuno, H. Sekiguchi, T. Tsubota, S. Moriyama, K. Yamazaki, and A. Yamazaki, Museum guide robot with communicative head motion, Proc. 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), pp.33-38, 2006.
  20. R. Kurnia, M.A. Hossain, and Y. Kuno, Use of spatial reference systems in interactive object recognition, Proceedings of the Third Canadian Conference on Computer and Robot Vision, 2006.
  21. Yoshinori Kuno, Interactive Object Recognition for Helper Robots, NRL Workshop on Robust Robot Vision (Invited talk), 2006.
  22. M.A. Hossain, R. Kurnia, and Y. Kuno, Geometric and photometric analysis for interactively recognizing multicolor or partially occluded objects, G. Bebis, et al. Eds., Advances in Visual Computing, LNCS 3804, Springer, pp.134-142, 2005.
  23. M.A. Hossain, R. Kurnia, A. Nakamura, and Y. Kuno, Recovery from segmentation failures using photometric invariance in an interactive object recognition system, Proceedings of the IEEE Tencon'05, CD-ROM, 2005.
  24. M.A. Hossain, R. Kurnia, A. Nakamura, and Y. Kuno, Interactive vision to detect target objects for helper robots, Proceedings of the Seventh International Conference on Multimodal Interfaces, pp.293-300, 2005.
  25. Y. Kuno, C. Yamazaki, K. Yamazaki, and A. Yamazaki, Face direction control for a guide robot using visual information, The Ninth European Conference on Computer-Supported Cooperative Work (ECSCW) Extended Abstracts, pp.83-85, 2005.
  26. R. Kurnia, M.A. Hossain, A. Nakamura, and Y. Kuno, Using reference objects to specify position in interactive object recognition, Proceedings of the International Conference on Instrumentation, Communication and Information Technology, pp.709-714, 2005.
  27. Nakamura A., Tabata S., Ueda T., Kiyofuji S., and Kuno Y., Dance Training System with Active Vibro-Devices and a Mobile Image Display, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pp.3827-3832, 2005.
  28. Y. Kuno, A. Nakamura, and D. Miyauchi, Beckoning robots with the eyes, Proceedings of the International Workshop on Intelligent Environments, pp.260-266, 2005.
  29. M.A. Hossain, R. Kurnia, A. Nakamura, and Y. Kuno, Object recognition using environmental cues mentioned explicitly or implicitly in speech, Proceedings of the Ninth IAPR Conference on Machine Vision Applications (MVA 2005), pp.320-323, 2005.
  30. Y. Kuno, D. Miyauchi, and A. Nakamura, Robotic method of taking the initiative in eye contact, CHI2005 Extended Abstracts, pp.1577-1580, 2005.
  31. Akio Nakamura, Sou Tabata, Tomoya Ueda, Shinichiro Kiyofuji, and Yoshinori Kuno: ``Multimodal Presentation Method for a Dance Training System,'' CHI2005 Extended Abstracts, pp.1685-1688, 2005.
  32. Yoshinori Kuno, Akio Nakamura, Sou Tabata, Tomoya Ueda, and Shinichiro Kiyofuji: ``Multimodal Dance Training System based on Motion Analysis,'' Proceedings of the International Symposium on the CREST Digital Archiving Project, pp.120-125, Tokyo, Japan, March 8-9, 2005.
  33. Md. Altab Hossain, Rahmadi Kurnia, Akio Nakamura, and Yoshinori Kuno: ``Color Object Segmentation for Helper Robots,'' Proceedings of the 3rd International Conference on Electrical and Computer Engineering (ICECE 2004), pp.206-209/CD-ROM P051.pdf, Dhaka, Bangladesh, December 28-30, 2004.
  34. Rahmadi Kurnia, Md. Altab Hossain, Akio Nakamura, and Yoshinori Kuno: ``Query Generation for Helper Robots to Recognize Objects,'' Proceedings of the IEEE Conference on Robotics, Automation and Mechatronics (RAM 2004), pp.939-944/CD-ROM 1180_0.pdf, Singapore, December 1-3, 2004.
  35. Yasuhiro Nakano, Akio Nakamura, and Yoshinori Kuno: ``Web Browser Controlled by Eye Movements,'' Proceedings of the IASTED International Conference on Advances in Computer Science and Technology (ACST 2004), pp.93-98/CD-ROM 431-047.pdf, St. Thomas, US Virgin Islands, November 22-24, 2004.
  36. Sou Tabata, Akio Nakamura, and Yoshinori Kuno: ``Development of an Easy Dance Teaching System Using Active Devices,'' Proceedings of the IASTED International Conference on Advances in Computer Science and Technology (ACST 2004), pp.38-43/CD-ROM 431-056.pdf, St. Thomas, US Virgin Islands, November 22-24, 2004.
  37. Tomohiro Iwase, Akio Nakamura, and Yoshinori Kuno: ``Robotic Wheelchair Understanding the User's Intention in Speech Using the Environmental Information,'' Proceedings of the IASTED International Conference on Advances in Computer Science and Technology (ACST 2004), pp.285-290/CD-ROM 431-042.pdf, St. Thomas, US Virgin Islands, November 22-24, 2004.
  38. Yoshinori Kuno, Arihiro Sakurai, Dai Miyauchi, and Akio Nakamura: ``Two-way Eye Contact between Humans and Robots,'' Proceedings of the Sixth International Conference on Multimodal Interfaces (ICMI 2004), pp.1-8/CD-ROM p1.pdf, State College, PA, USA, October 14-15, 2004.
  39. Tomoyuki Niwayama, Akio Nakamura, Sou Tabata, and Yoshinori Kuno: ``Mobile Robot System for Easy Dance Training,'' Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), pp.2223-2228/CD-ROM IROS04-1160.pdf, Sendai International Center, Sendai, Japan, September 28-October 2, 2004.
  40. Rahmadi Kurnia, Md. Altab Hossain, Akio Nakamura, and Yoshinori Kuno: ``Object Recognition through Human-Robot Interaction by Speech,'' Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004), /CD-ROM 105.pdf, Kurashiki, Okayama, Japan, September 20-22, 2004.
  41. Akio Nakamura, Tomoyuki Niwayama, Sou Tabata, and Yoshinori Kuno: ``Development of a Basic Dance Training System with Mobile Robots,'' Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004), /CD-ROM 036.pdf, Kurashiki, Okayama, Japan, September 20-22, 2004.
  42. Dai Miyauchi, Arihiro Sakurai, Akio Nakamura, and Yoshinori Kuno: ``Human-Robot Eye Contact through Observations and Actions,'' Proceedings of the 17th International Conference on Pattern Recognition, /CD-ROM V41A_4_02.pdf, Cambridge, UK, August 23-26, 2004.
  43. Zaliyana Mohd Hanafiah, Chizu Yamazaki, Akio Nakamura, and Yoshinori Kuno: ``Understanding Inexplicit Utterances Using Vision for Helper Robots,'' Proceedings of the 17th International Conference on Pattern Recognition, /CD-ROM V44_2_04.pdf, Cambridge, UK, August 23-26, 2004.
  44. Dai Miyauchi, Arihiro Sakurai, Akio Nakamura, and Yoshinori Kuno: ``Active Eye Contact for Human-Robot Communication,'' CHI2004 Extended Abstracts, pp.1099-1102/CD-ROM Disc2 2p1099.pdf, Vienna, Austria, April 24-29, 2004.
  45. Zaliyana Mohd Hanafiah, Chizu Yamazaki, Akio Nakamura, and Yoshinori Kuno: ``Human-Robot Speech Interface Understanding Inexplicit Utterances Using Vision,'' CHI2004 Extended Abstracts, pp.1321-1324/CD-ROM Disc2 2p1321.pdf, Vienna, Austria, April 24-29, 2004.
  46. Mitsutoshi Yoshizaki, Akio Nakamura, and Yoshinori Kuno: ``Vision-Speech System Adapting to the User and Environment for Service Robots,'' Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), pp.1290-1295/CD-ROM 123.pdf, Las Vegas, Nevada, October 27-31, 2003.
  47. Yoshinori Kuno, Mitsutoshi Yoshizaki, and Akio Nakamura: ``Vision-Speech System Becoming Efficient and Friendly through Experience,'' Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction (INTERACT '03), pp.801-804, Zurich, Switzerland, September 1-5, 2003.
    (Human-Computer Interaction INTERACT '03, Eds. Matthias Rauterberg, Marino Menozzi, and Janet Wesson, IOS Press, 2003.)
  48. Yoshinori Kuno, Tomoyuki Yoshimura, Masashi Mitani, and Akio Nakamura: ``Robotic Wheelchair Looking at All People with Multiple Sensors,'' Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2003), pp.341-346, National Center of Sciences, Tokyo, Japan, July 30-August 1, 2003.
  49. Y. Kuno, "Intelligent wheelchair looking at pedestrians and the caregiver," Constantine Stephanidis, Ed., Universal Access in HCI: Inclusive Design in the Information Society, pp.221-225, 2003.
  50. Y. Kuno, A. Nakamura, T. Murakami, T. Niwayama, and S. Tabata, "Motion analysis of dances and its applications," Proceedings of International Symposium on the CREST Digital Archiving Project, pp.183-193, 2003.
  51. Y. Kuno and A. Nakamura, "Robotic wheelchair looking at all people," CHI2003 Extended Abstracts, CD-ROM, 2003.
  52. M. Yoshizaki, Y. Kuno, and A. Nakamura, "Mutual Assistance between Speech and Vision for Human-Robot Interface," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, CD-ROM, 2002.
  53. T. Murakami, A. Nakamura, and Y. Kuno, "Generation of Digital Contents for Traditional Dances by Integrating Appearance and Motion Data," Proceedings of the Second IASTED International Conference on Visualization, Imaging, and Image Processing, pp.672-676, 2002.
  54. H. Kawasaki, A. Nakamura, and Y. Kuno, "Memory Aid System by Monitoring Everyday Life," Proceedings of the Second IASTED International Conference on Visualization, Imaging, and Image Processing, pp.662-667, 2002.
  55. T. Numajiri, A. Nakamura, and Y. Kuno, "Speed Browser Controlled by Eye Movements," Proceedings of the IEEE International Conference on Multimedia and Expo 2002, CD-ROM, 2002.
  56. M.H. Jeong, Y. Kuno, N. Shimada, and Y. Shirai, "Two-Hand Gesture Recognition using Coupled Switching Linear Model," Proceedings of the 16th International Conference on Pattern Recognition, CD-ROM, 2002.
  57. M.H. Jeong, Y. Kuno, N. Shimada, and Y. Shirai, ``Complex gesture recognition using coupled switching linear model,'' Proc. 5th Asian Conference on Computer Vision, pp.132-137, 2002.
  58. Y. Kuno, T. Murakami, N. Shimada, and Y. Shirai, ``User and social interfaces by observing human faces for intelligent wheelchairs,'' Proc. Workshop on Perceptive User Interfaces, CD-ROM, 2001.
  59. M. Yoshizaki, Y. Kuno, and A. Nakamura, ``Human-robot interface based on the mutual assistance between speech and vision,'' Proc. Workshop on Perceptive User Interfaces, CD-ROM, 2001.
  60. Y. Murakami, Y. Kuno, N. Shimada, and Y. Shirai, ``Collision avoidance by observing pedestrians' faces for intelligent wheelchairs,'' Proc. 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.2018-2023, 2001.
  61. M.H. Jeong, Y. Kuno, N. Shimada, and Y. Shirai, ``Recognition of shape-changing gestures based on switching linear model,'' Proc. 11th International Conference on Image Analysis and Processing, pp.14-19, 2001.
  62. Y. Kuno, "Detecting and tracking people in complex scenes," Proc. 2nd European Workshop on Advanced Video-based Surveillance Systems, pp.14-19, 2001.
  63. Yoshinori Kuno, Yoshifumi Murakami, Nobutaka Shimada, and Yoshiaki Shirai, "Intelligent wheelchair observing the faces of both user and pedestrians," Preprints of IFAC Workshop on Mobile Robot Technology, pp.232-237, 2001.
  64. Mun Ho Jeong, Yoshinori Kuno, Nobutaka Shimada, and Yoshiaki Shirai: Complex Hand-Gesture Recognition Using Active Contour and Switching Linear Model, Proc. 7th Korea-Japan Joint Workshop on Computer Vision, pp.151-156, 2001.
  65. Osamu Nishiyama, Shengshien Cheng, Yoshinori Kuno, Nobutaka Shimada, and Yoshiaki Shirai: Speech and Gesture Based Interface Using Context and Visual Information, Proc. 6th International Conference on Control, Automation, Robotics and Vision, CD-ROM, 2000.
  66. Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, and Yoshiaki Shirai: Understanding and Learning of Gestures through Human-Robot Interaction, Proc. 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.2133-2138, 2000.
  67. Shengshien Cheng, Yoshinori Kuno, Nobutaka Shimada, and Yoshiaki Shirai: Human-robot interface based on speech understanding assisted by vision, T. Tan, Y. Shi, and W. Gao (Eds.), Advances in Multimodal Interfaces - ICMI 2000, pp.16-23, 2000.
  68. Yoshifumi Murakami, Yoshinori Kuno, Nobutaka Shimada, and Yoshiaki Shirai: Intelligent Wheelchair Moving among People Based on Their Observations, Proc. 2000 International Conference on Systems, Man and Cybernetics, pp.1466-1471, 2000.
  69. Nobutaka Shimada, Yoshiaki Shirai, and Yoshinori Kuno: Model Adaptation and Posture Estimation of Moving Articulated Object Using Monocular Camera, Hans-Hellmut Nagel and Francisco J. Perales Lopez Eds., Articulated Motion and Deformable Objects, Lecture Notes in Computer Science 1899, Springer (Proc. First International Workshop, AMDO 2000), pp.159-172, 2000.
  70. Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, and Yoshiaki Shirai: Intelligent Wheelchair Remotely Controlled by Interactive Gestures, Proceedings of 15th International Conference on Pattern Recognition, Vol.4, pp.672-675, 2000.
  71. Nobutaka Shimada, Kousuke Kimura, Yoshiaki Shirai, and Yoshinori Kuno: Hand Posture Estimation by Combining 2-D Appearance-based and 3-D Model-based Approaches, Proceedings of 15th International Conference on Pattern Recognition, Vol.3, pp.709-712, 2000.
  72. Yoshinori Kuno, Teruhisa Murashima, Nobutaka Shimada, and Yoshiaki Shirai: Interactive Gesture Interface for Intelligent Wheelchairs, Proceedings of IEEE International Conference on Multimedia and Expo, CD-ROM, 2000.
  73. Nobutaka Shimada, Kousuke Kimura, Yoshinori Kuno, and Yoshiaki Shirai: 3-D Hand Posture Estimation by Indexing Monocular Silhouette Images, Proceedings of the 6th Workshop on Frontiers of Computer Vision, pp.150-155, 2000.
  74. Kentaro Hayashi, Yoshinori Kuno, Nobutaka Shimada, and Yoshiaki Shirai: Recovery of Human Postures Using Robust Dynamic Calibration, Proceedings of the Fourth Asian Conference on Computer Vision, pp.49-56, 2000.
  75. Nobutaka Shimada, Kousuke Kimura, Yoshinori Kuno, and Yoshiaki Shirai: Image-based Measuring of Hand Postures with Adaptability to Individuals, Proceedings of the International Conference on Mechatronic Technology, pp.404-409, 1999.
  76. Y. Kuno, S. Nakanishi, T. Murashima, N. Shimada, and Y. Shirai : Intelligent Wheelchair Based on the Integration of Human and Environment Observations, Proceedings of the 1999 IEEE International Conference on Information Intelligence and Systems, pp.342-349, 1999.
  77. S. Nakanishi, Y. Kuno, and Y. Shirai : Robotic wheelchair based on observations of both user and environment, Proc. 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.912-917, 1999.
  78. A. Iketani, Y. Kuno, N. Shimada, and Y. Shirai : Real-time surveillance system detecting persons in complex scenes, Proc. IAPR International Conference on Image Analysis and Processing, pp.1112-1115, 1999.
  79. Y. Kuno, S. Nakanishi, T. Murashima, N. Shimada, and Y. Shirai : Robotic wheelchair observing its inside and outside, Proc. IAPR International Conference on Image Analysis and Processing, pp.502-507, 1999.
  80. Y. Kuno, T. Ishiyama, S. Nakanishi, and Y. Shirai : Combining observations of intentional and unintentional behaviors for human-computer interaction, Proc. CHI 99, pp.238-245, 1999.
  81. Y. Kuno, S. Nakanishi, T. Murashima, N. Shimada, and Y. Shirai : Robotic wheelchair with three control modes, Proc. IEEE International Conference on Robotics and Automation, pp.2590-2595, 1999.
  82. Y. Kuno, Y. Adachi, T. Murashima, and Y. Shirai : Intelligent wheelchair looking at its user, Proc. Second International Conference on Audio- and Video-based Biometric Person Authentication, pp.84-89, 1999.
  83. Y. Kuno, T. Ishiyama, Y. Adachi, and Y. Shirai : Human interface systems using intentional and unintentional behaviors, Proc. 1998 Workshop on Perceptual User Interfaces, pp.55-58, 1998.
  84. T. Takahashi, S. Nakanishi, Y. Kuno, and Y. Shirai : Human-robot interface by verbal and nonverbal communication, Proc. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.924-929, 1998.
  85. Y. Adachi, Y. Kuno, N. Shimada, and Y. Shirai : Intelligent wheelchair using visual information on human faces, Proc. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.354-359, 1998.
  86. H. Takizawa, Y. Shirai, J. Miura, and Y. Kuno : Planning of observation and motion for interpretation of road intersection scenes considering uncertainty, Proc. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.520-525, 1998.
  87. T. Takahashi, S. Nakanishi, Y. Kuno, and Y. Shirai : Helping computer vision by verbal and nonverbal communication, Proc. 14th International Conference on Pattern Recognition, pp.1216-1218, 1998.
  88. A. Iketani, A. Nagai, Y. Kuno, and Y. Shirai : Detecting persons on changing background, Proc. 14th International Conference on Pattern Recognition, pp.74-76, 1998.
  89. Y. Kuno, T. Ishiyama, K.H. Jo, N. Shimada, and Y. Shirai : Vision-based human interface system selectively recognizing intentional hand gestures, Proc. IASTED Int. Conf. on Computer Graphics and Imaging, pp.219-222, 1998.
  90. N. Shimada, Y. Shirai, Y. Kuno, and J. Miura : Hand gesture estimation and model refinement using monocular camera - Ambiguity limitation by inequality constraints, Proc. 3rd IEEE International Conference on Face and Gesture Recognition, pp.268-273, 1998.
  91. K.H. Jo, Y. Kuno, and Y. Shirai : Manipulative hand gesture recognition using task knowledge for human computer interaction, Proc. 3rd IEEE International Conference on Face and Gesture Recognition, pp.468-473, 1998.
  92. N. Shimada, Y. Shirai, Y. Kuno, and J. Miura : 3-D pose estimation and model refinement of an articulated object from a monocular image sequence, Proc. 3rd Asian Conference on Computer Vision, Vol.I, pp.672-679, 1998.
  93. K.H. Jo, Y. Kuno, and Y. Shirai : Context-based recognition of manipulative hand gestures for human computer interaction, Proc. 3rd Asian Conference on Computer Vision , Vol.II, pp.368-375, 1998.
  94. K. Hayashi, Y. Kuno, and Y. Shirai : Pointing gesture recognition system permitting user's freedom of movement, Proc. Workshop on Perceptual User Interfaces, pp.16-19, 1997.
  95. T.C.H. Heng, Y. Kuno, and Y. Shirai : Combination of active sensing and sensor fusion for collision avoidance in mobile robots : Image Analysis and Processing, A.D. Bimbo Ed., Lecture Notes in Computer Science 1311, (Proc. 9th Int. Conf. on Image Analysis and Processing, Vol. II), pp.568-575, 1997.
  96. S. Maeda, Y. Kuno, and Y. Shirai : Active navigation vision based on eigenspace analysis, Proc. 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol.2, pp.1018-1023, 1997.
  97. T.C.H. Heng, Y. Kuno, and Y. Shirai : Active sensor fusion for collision avoidance, Proc. 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol.3, pp.1244-1249, 1997.
  98. Y. Kuno : Two applications of computer vision: surveillance and human interface, International Workshop on Computer Vision Technology, pp.126-133, 1996 (invited paper).
  99. Y. Kuno, O. Takae, T. Takahashi, and Y. Shirai : Object recognition using multiple view invariance based on complex features, IEEE 1996 Workshop on Applications of Computer Vision, pp.129-134, 1996.
  100. K. Hayashi, Y. Kuno, and Y. Shirai : Human robot interface with appropriate frame selection, IAPR Workshop on Machine Vision Applications, pp.187-190, 1996.
  101. H. Takizawa, Y. Shirai, Y. Kuno, and J. Miura : Recognition of intersection scene by attentive observation for a mobile robot, 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1648-1654, 1996.
  102. A. Nagai, Y. Kuno, and Y. Shirai : Surveillance system based on spatio-temporal information, IEEE 1996 International Conference on Image Processing, Vol.II, pp.593-596, 1996.
  103. H. Takizawa, Y. Shirai, Y. Kuno, and J. Miura : IAPR/TC-8 Workshop on Machine Perception Applications, 1996.
  104. Y. Mae, Y. Shirai, J. Miura, and Y. Kuno : Object tracking in cluttered background based on optical flows and edges, 13th International Conference on Pattern Recognition, Vol.1, pp.196-200, 1996.
  105. K.H. Jo, Y. Kuno, and Y. Shirai : Invariance based human interface system using realtime tracker, Second Asian Conference on Computer Vision, pp.II-22-26, 1995.
  106. Y. Kuno, K. Hayashi, K.H. Jo, and Y. Shirai : Human-robot interface using uncalibrated stereo vision, 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.525-530, 1995.
  107. K.H. Jo, Y. Kuno, and Y. Shirai : Vision-based human interface system with world-fixed and human-centered frames, in Y. Anzai, K. Ogawa, and H. Mori (Editors), Symbiosis of Human and Artifact, Elsevier, pp.193-198, 1995.
  108. N. Shimada, Y. Shirai, and Y. Kuno : Hand gesture recognition using computer vision based on model-matching method, in Y. Anzai, K. Ogawa, and H. Mori (Editors), Symbiosis of Human and Artifact, Elsevier, pp.11-16, 1995.
  109. Y. Kuno, K.H. Jo, K. Hayashi, and Y. Shirai : Human-centered human-computer interface using multiple view invariants, International Workshop on Automatic Face- and Gesture-Recognition, pp.266-271, 1995.
  110. O. Takae, Y. Kuno, J. Miura, and Y. Shirai : Object recognition using conic-based invariants from multiple views, IAPR Workshop on Machine Vision Applications, pp.17-20, 1994.
  111. Y. Kuno, M. Sakamoto, K. Sakata, and Y. Shirai : Vision-based human interface with user-centered frame, 1994 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.2023-2029, 1994.
  112. R. Cipolla, Y. Okamoto, and Y. Kuno : Robust structure from motion using motion parallax, IEEE 4th International Conference on Computer Vision, pp.374-382, 1993.
  113. R. Cipolla, Y. Okamoto, and Y. Kuno : Qualitative visual interpretation of 3D hand gestures using motion parallax, IAPR Workshop on Machine Vision Applications, pp. 477-482, 1992.
  114. K. Fukui, H. Nakai, and Y. Kuno : Multiple object tracking system with three level continuous processes, IEEE Workshop on Applications of Computer Vision, pp. 19-27, 1992.
  115. I. Kweon, Y. Kuno, M. Watanabe, and K. Onoguchi: Behavior-based intelligent robot in dynamic indoor environments, 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.1339-1346, 1992.
  116. I. Kweon, Y. Kuno, M. Watanabe, and K. Onoguchi: Behavior-based mobile robot using active sensor fusion, 1992 IEEE International Conference on Robotics and Automation, pp.1675-1682, 1992.
  117. M. Watanabe, K. Onoguchi, I. Kweon, and Y. Kuno: Architecture of behavior-based mobile robot in dynamic environment, 1992 IEEE International Conference on Robotics and Automation, pp.2711-2718, 1992.
  118. H. Kubota, Y. Okamoto, H. Mizoguchi, and Y. Kuno : Vision processor for moving object analysis, Computer Architecture for Machine Perception, ed. B. Zavidovique and P. Wendel (Proceedings of the Workshop on Computer Architecture for Machine Perception, Paris, 1991), pp.461-470, 1991.
  119. Y. Kuno, Y. Okamoto, and S. Okada : Object recognition using a feature search strategy generated from a 3-D model, IEEE 3rd International Conference on Computer Vision, pp.626-635, 1991.
  120. I. Kweon, M. Watanabe, K. Onoguchi, and Y. Kuno : Behavior-based mobile robot using multiple sensors, 1st Korea-Japan Joint Conference on Computer Vision, pp.260-267, 1991.
  121. R. Nakayama, H. Okano, H. Shimada, S. Okada, A. Kawamura, Y. Kuno, and T. Miyazawa : Advanced robot vision and operation system for nuclear power plant facilities, '91 International Symposium on Advanced Robot Technology, pp.247-254, 1991.
  122. R. Nakayama, H. Okano, Y. Kuno, T. Miyazawa, H. Shimada, S. Okada, and A. Kawamura : Robot vision for nuclear advanced robot, SMiRT 11 Transactions Vol. SD1, pp.337-342, 1991.
  123. H. Kubota, K. Fukui, M. Ishikawa, H. Mizoguchi, and Y. Kuno : Advanced vision processor with an overall image processing unit and multiple local image processing modules, MVA'90 IAPR Workshop on Machine Vision Applications, pp.401-404, 1990.
  124. Y. Okamoto, Y. Kuno, and S. Okada : Robot vision using object models for recognition strategy generation and precise localization, 16th Annual Conference of IEEE Industrial Electronics Society, pp.558-563, 1990.
  125. K. Onoguchi, M. Watanabe, Y. Okamoto, Y. Kuno, and H. Asada: A visual navigation system using a multi-information local map, 1990 IEEE International Conference on Robotics and Automation, pp.767-774, 1990.
  126. Y. Okamoto, Y. Kuno, K. Onoguchi, M. Watanabe, and H. Asada : Object recognition using a tree-like procedure generated from 3-D model, IAPR Workshop on Computer Vision, pp.441-446, 1988.
  127. Y. Kuno, K. Ikeuchi, and T. Kanade : Model-based vision by cooperative processing of evidence and hypotheses using configuration spaces, Proceedings of SPIE, Vol. 938, pp.444-453, 1988.
  128. Y. Kuno : Successful vision applications and their limitations, IAPR Workshop on Errors and Failures in Vision Systems, pp.33-40, 1987.
  129. M. Watanabe, K. Onoguchi, Y. Kuno, H. Hoshino, and S. Tsunekawa : Obstacle detection method for mobile robots with stereo vision, 5th Scandinavian Conference on Image Analysis, pp.325-334, 1987.
  130. Y. Kuno, H. Numagami, M. Ishikawa, H. Hoshino, and M. Kidode: Three-dimensional vision techniques for an advanced robot system, IEEE International Conference on Robotics and Automation, pp. 11-16, 1985.
  131. Y. Kuno, H. Numagami, M. Ishikawa, H. Hoshino, Y. Nakamura, and M. Kidode : Robot vision implementation by high-speed image processor TOSPIX, International Conference on Advanced Robotics, pp.163-170, 1983.
  132. M. Onoe and Y. Kuno : Digital processing of images taken by fish-eye lens, 6th International Conference on Pattern Recognition, pp.105-108, 1982.
  133. Y. Kuno : Digital processing of images taken by fish-eye lens, IEEE 1981-82 Student Papers, pp.225-230, 1982.
  134. M. Onoe and Y. Kuno : Recognition of adenocarcinoma in automated uterine cytology, 4th International Conference on Pattern Recognition, pp.883-885, 1978.



kuno@cv.ics.saitama-u.ac.jp