Professor Hiroshi Ninomiya


[Research Theme]
Research on "artificial intelligence": computers endowed with intelligence

[Main Courses Taught]
Artificial Intelligence, Optimization Mathematics, Artificial Intelligence Laboratory

[Degree]
Doctor of Engineering

Specialties / Research Fields

Artificial intelligence, learning theory, optimization methods

Research Theme

Research on "artificial intelligence": computers endowed with intelligence
Living organisms have evolved over hundreds of millions of years. Within this process, the evolution of the brain (a network of neurons) is the ultimate form of evolution, and it is still continuing today. The research topic of our laboratory is to realize models of the brain on computers. In particular, we are working to establish methods for training "artificial intelligence" systems such as deep learning.
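As a rough illustration of this research area, the following is a minimal sketch of quasi-Newton training of a small multilayer neural network, assuming Python with the PyTorch library. It uses the generic limited-memory L-BFGS optimizer on a toy regression problem; it is not the laboratory's own Nesterov-accelerated or momentum-based quasi-Newton algorithms described in the publications below.

# Minimal sketch: training a small neural network with a quasi-Newton optimizer.
# Assumes PyTorch; uses generic L-BFGS, not the lab's Nesterov-accelerated variants.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: a highly nonlinear 1-D target function.
x = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
y = torch.sin(4.0 * x) * torch.exp(-x ** 2)

# Small multilayer (feedforward) neural network.
model = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))
loss_fn = nn.MSELoss()

# L-BFGS is a limited-memory quasi-Newton method; it needs a closure that
# re-evaluates the loss and its gradients at each inner iteration.
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.5, max_iter=20, history_size=10)

def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

for epoch in range(50):
    loss = optimizer.step(closure)

print(f"final training MSE: {loss.item():.6f}")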

Research Keywords

Artificial neural networks, learning algorithms, deep learning

Relation to the SDGs


Major Research Achievements

<Journal Papers>
  1. Yasuda, Sota, Sendilkkumaar, Indrapriyadarsini, Ninomiya, Hiroshi, Kamio, Takeshi and Asai, Hideki:"addHessian: Combining quasi-Newton method with first-order method for neural network training", IEICE NOLTA, vol.13-N, issue 2, pp.361-366, Apr., 2022.
  2. Mahboubi, Shahrzad, Yamatomi, Ryo, Sendilkkumaar, Indrapriyadarsini, Ninomiya, Hiroshi, and Asai, Hideki:"On the Study of Memory-Less quasi-Newton Method with Momentum Term for Neural Network Training", IEICE NOLTA, vol.13-N, issue 2, pp.271-276, Apr., 2022.
  3. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, Kamio Takeshi, and Asai, Hideki:"Accelerating Symmetric Rank-1 Quasi-Newton Method with Nesterov’s Gradient for Training Neural Networks", Algorithms, MDPI, 2022, Vol.15, issue 1, 6, https://doi.org/10.3390/a15010006, Jan., 2022.
  4. Mahboubi, Shahrzad, Sendilkkumaar, Indrapriyadarsini, Ninomiya, Hiroshi, and Asai, Hideki:"Momentum Acceleration of quasi-Newton based Optimization Technique for Neural Network Training", IEICE NOLTA, vol.12-N, issue 3, pp.554-574, July, 2021.
  5. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, Kamio Takeshi, and Asai, Hideki:"A Nesterov’s Accelerated quasi-Newton method for Global Routing using Deep Reinforcement Learning", IEICE NOLTA, vol.12-N, issue 3, pp.323-335, July, 2021.
  6. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, and Asai, Hideki:"aSNAQ : An Adaptive Stochastic Nesterov’s Accelerated Quasi-Newton Method for Training RNNs", IEICE NOLTA, vol.11, issue 4, pp.409-421, Oct., 2020.
  7. Mahboubi, Shahrzad, Sendilkkumaar, Indrapriyadarsini,Ninomiya, Hiroshi,and Asai, Hideki:"A Robust quasi-Newton Training with Adaptive Momentum for Microwave Circuit Models in Neural Networks", Journal of Signal Processing, vol. 24, no. 1, pp.11-17, Jan., 2020.
  8. Mahboubi, Shahrzad and Ninomiya, H.:"A Novel Training Algorithm based on Limited-Memory quasi-Newton Method with Nesterov’s Accelerated Gradient in Neural Networks and its Application to Highly-Nonlinear Modeling of Microwave Circuit", International Journal On Advances in Software, vol.11, no.3&4, pp.323-334, Dec., 2018.
  9. Ninomiya, H.:"A Novel quasi-Newton-based Optimization for Neural Network Training incorporating Nesterov's Accelerated Gradient", IEICE NOLTA, vol.8-N, no.4, pp.289-301, Oct., 2017.
  10. Kobayashi, M., Ninomiya, H., Miura, Y., and Watanabe S.:"Reconfigurable Dynamic Logic Circuit Generating t-Term Boolean Functions Based on Double-Gate CNTFETs", IEICE Trans. on Fundamentals., vol.E97-A, no.5, pp.1051-1058, May, 2014.
  11. Ninomiya, H., Kobayashi, M., Miura, Y. and Watanabe S.:"Reconfigurable Circuit Design based on Arithmetic Logic Unit Using Double-Gate CNTFETs", IEICE Trans. on Fundamentals., vol.E97-A, no.2, pp.675-678, Feb., 2014.
  12. Kato, J., Watanabe, S., Ninomiya, H., Kobayashi, M. and Miura, Y.:"Circuit Design of 2-Input Reconfigurable Dynamic Logic Based on Double Gate MOSFETs with Whole Set of 16 Functions", Contemporary Engineering Sciences, vol.7, no.2, pp.87-102, 2014.
  13. Kato, J., Watanabe, S., Ninomiya, H., Kobayashi, M. and Miura, Y.:"Circuit Design of Reconfigurable Dynamic Logic Based on Double Gate CNTFETs Focusing on Number of States of Back Gate Voltages", Contemporary Engineering Sciences, vol.7, no.1, pp.39-52, 2014.
  14. Kobayashi, M., Ninomiya, H. and Watanabe S.:"Circuit Design of Reconfigurable Logic Based on Double-Gate CNTFETs", IEICE Trans. on Fundamentals., vol.E96-A, no.7, pp.1642-1644, July. 2013.
  15. Ninomiya, H., Kobayashi, M. and Watanabe S.:"Reduced Reconfigurable Logic Circuit Design based on Double Gate CNTFETs using Ambipolar Binary Decision Diagram", IEICE Trans. on Fundamentals., vol.E96-A, no.1, pp.356-359, Jan. 2013.
  16. 阿倍俊和, 坂下善彦, 二宮 洋:"A Study on the Effectiveness of an Online/Batch Hybrid quasi-Newton Method for Training Multilayer Neural Networks" (in Japanese), Journal of Signal Processing, vol.16, no.5, pp.451-458, Sept., 2012.
  17. 二宮 洋:"Parameterized Online quasi-Newton Training of Multilayer Neural Networks" (in Japanese), IEICE Trans. A (Japanese Edition), vol.J95-A, no.8, pp.698-703, Aug., 2012.
  18. Ninomiya, H.:"Microwave Neural Network Models Using Improved Online quasi-Newton Training Algorithm", Journal of Signal Processing, vol.15, no.6, pp.483-488, Nov., 2011.
  19. 二宮 洋:"Neural Network Training by an Improved Online quasi-Newton Method" (in Japanese), IEICE Trans. A (Japanese Edition), vol.J93-A, no.12, pp.828-832, Dec., 2010.
  20. Ninomiya, H.:"A Hybrid Global/Local Optimization Technique for Robust Training and its Application to Microwave Neural Network Models", Journal of Signal Processing, vol.14, no.3, pp.213-222, May, 2010.
  21. Ninomiya, H., Numayama, K. and Asai, H.:"Two-stage Tabu Search for Nonslicing Floorplan Problem Represented by O-Tree", Journal of Signal Processing, vol. 11, no. 1, pp.17-24, Jan., 2007.
  22. 富田親弘, 二宮 洋, 浅井秀樹:"A Learning Algorithm for Multilayer Neural Networks Based on the Fixed-Point Homotopy Method" (in Japanese), IEICE Trans. A (Japanese Edition), vol.J89-A, no.1, pp.61-66, Jan., 2006.
  23. Ninomiya, H., Yamagishi, H. and Asai, H.:"Three-Dimensional Module Packing using 3DBSG Structure", Journal of Signal Processing, vol. 9, no. 6, pp.439-445, Nov., 2005.
  24. 吉田昌弘, 二宮 洋, 浅井秀樹:"A Recursive Least-Squares Local Learning Method for Improving the Generalization Ability of Multilayer Neural Networks" (in Japanese), Journal of Signal Processing, vol.9, no.1, pp.79-86, 2005.
  25. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design Method of Neural Networks for Limit Cycle Generator by Linear Programming", IEICE Trans. on Fundamentals., vol.E84-A, no.2, pp.688-692, Feb.2001
  26. Kamo, A., Ninomiya, H., Yoneyama, T. and Asai, H.:"A Fast Neural Network Simulator for State Transition analysis", IEICE Trans. on Fundamentals., vol.E82-A, no.9, pp.1796-1801, Sept. 1999
  27. Ninomiya, H., Kamo, A., Yoneyama, T. and Asai, H.:"A Fast Algorithm for Spatiotemporal Pattern Analysis of Neural Networks with Multivalued Logic", IEICE Trans. on Fundamentals., vol.E81-A, no.9, pp.1847-1852, Sept. 1998
  28. Yamamoto, H., Ninomiya, H. and Asai, H.:"A Neuro-Based Optimization Algorithm for Rectangular Puzzles", IEICE Trans. on Fundamentals., vol.E81-A, no.6, pp.1112-1118, June 1998
  29. 二宮 洋, 中山武司, 浅井秀樹:"A Method for Solving Tiling Problems Using Contact-Detection Functions with Analog Neural Networks" (in Japanese), IEICE Trans. A (Japanese Edition), vol.J80-A, no.11, pp.1951-1959, Nov., 1997.
  30. Yamamoto, H., Nakayama, T., Ninomiya, H. and Asai, H.:"Neuro-Based Optimization Algorithm for Three Dimensional Puzzles", IEICE Trans. on Fundamentals., vol.E80-A, no.6, pp.1049-1054, June 1997
  31. Ninomiya, H. and Asai, H.:"Neural Networks for Digital Sequential Circuits", IEICE Trans. on Fundamentals., vol.E77-A, no.12, pp. 2112-2115, Dec. 1994
  32. Kamio, T., Ninomiya, H. and Asai, H. :"A Neural Net Approach to Discrete Walsh Transform" IEICE Trans. on Fundamentals., vol.E77-A, no.11, pp. 1882-1886, Nov. 1994
  33. Ninomiya, H. and Asai, H.:"Design and Simulation of Neural Network Digital Sequential Circuits", The Institute of Electronics, Information and Communication Engineers Transactions on Fundamentals of Electronics, Communications and Computer Sciences (IEICE Trans. on Fundamentals), vol.E77-A, no.6, pp.968-976, June 1994
<International Conference Proceedings>
  1. Mahboubi, Shahrzad and Ninomiya, Hiroshi:"Weight Difference Propagation for Stochastic Gradient Descent Learning", Proc. IARIA The Eighteenth International Multi-Conference on Computing in the Global Information Technology, ICCGI 2023, pp.??-??, March, 2023. (Virtual, Barcelona, Spain)
  2. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, Takeshi Kamio and Asai, Hideki.:"A Stochastic Momentum Accelerated quasi-Newton Method for Neural Networks", Proc. 36th AAAI (Association for the Advancement of Artificial Intelligence) Conference on Artificial Intelligence, AAAI-2022, pp.12973-12974, Feb., 2022.
  3. Mahboubi, Shahrzad, Yamatomi, Ryo, Sendilkkumaar, Indrapriyadarsini, Ninomiya, Hiroshi,and Asai, Hideki:"On the Study of Memory-Less quasi-Newton Method with Momentum Term for Neural Network Training", Proc. 2021 IEICE Nonlinear Science Workshop (IEICE/NSW2021), Dec., 2021. (Online)
  4. Yasuda, Sota, Sendilkkumaar, Indrapriyadarsini, Ninomiya, Hiroshi, Kamio, Takeshi, and Asai, Hideki.:"addHessian: Combining quasi-Newton method with first-order method for neural network training", Proc. 2021 IEICE Nonlinear Science Workshop (IEICE/NSW2021), Dec., 2021. (Online)
  5. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, Takeshi Kamio and Asai, Hideki.:"VLSI Physical Design Automation using Deep Reinforcement Learning", Poster presentation at WiML Workshop co-located with NeurIPS, Dec., 2020.
  6. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, Takeshi Kamio and Asai, Hideki.:"A Nesterov's Accelerated quasi-Newton method for Global Routing using Deep Reinforcement Learning", Proc. NOLTA 2020, pp.251–254, Nov., 2020. (Virtual)
  7. S Indrapriyadarsini, Shahrzad Mahboubi, Hiroshi Ninomiya, Takeshi Kamio, and Hideki Asai:"A Neural Network Approach to Analog Circuit Design Optimization Using Nesterov's Accelerated Quasi-Newton Method", Proc. IEEE/ISCAS 2020, DOI: 10.1109/ISCAS45731.2020.9181152, Oct., 2020. (Virtual, Seville, Spain)
  8. M. D. Sudeera H. Gunathilaka, Mahboubi, Shahrzad and Ninomiya, H.:"Acceleration Technique of Two-Phase Quasi-Newton method with Momentum for Optimization Problem", Proc. IARIA The Twelfth International Conference on Information, Process, and Knowledge Management, eKNOW 2020, pp.17-19, March, 2020. (Virtual, Valencia, Spain)
  9. Yasuda, Sota, Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, and Asai, Hideki.:"A Stochastic Variance Reduced Nesterov’s Accelerated Quasi-Newton Method", Proc. IEEE/ICMLA 2019, pp.1874-1879, Dec., 2019. (Boca Raton, Florida)
  10. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, and Asai, Hideki.:"An Adaptive Stochastic Nesterov Accelerated Quasi Newton Method for Training RNNs", Proc. NOLTA 2019, pp.208–211, Dec., 2019. (Kuala Lumpur, Malaysia)
  11. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, and Asai, Hideki.:"A Stochastic Quasi-Newton Method with Nesterov’s Accelerated Gradient", Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Artificial Intelligence, Springer, and Proc. The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML/PKDD 2019, Part I, pp.743-760, Sept., 2019. (Würzburg, Germany)
  12. Mahboubi, Shahrzad, Sendilkkumaar, Indrapriyadarsini,Ninomiya, Hiroshi,and Asai, Hideki:"Momentum acceleration of quasi-Newton Training for Neural Networks", Proc. The 16th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2019, pp.268–281, Aug., 2019. (Yanuca Island, Cuvu, Fiji)
  13. Sendilkkumaar, Indrapriyadarsini, Mahboubi, Shahrzad and Ninomiya, Hiroshi, and Asai, Hideki.:"Implementation of a modified Nesterov's Accelerated quasi-Newton Method on Tensorflow", Proc. IEEE/ICMLA 2018, pp.1147-1154, Dec., 2018. (Orlando, Florida)
  14. Mahboubi, Shahrzad and Ninomiya, H.:"A Novel Quasi-Newton with Momentum Training for Microwave Circuit Models Using Neural Networks", Proc. IEEE/ICECS 2018, pp.629-632, Dec., 2018. (Bordeaux, FRANCE)
  15. Mahboubi, Shahrzad and Ninomiya, H.:"A Novel Training Algorithm based on Limited-Memory quasi-Newton Method with Nesterov's Accelerated Gradient for Neural Networks", Proc. IARIA The Tenth International Conference on Future Computational Technologies and Applications, FUTURE COMPUTING 2018, pp.1-3, Feb., 2018. (Barcelona, Spain)
  16. Ninomiya, H.:"Neural Network Training based on quasi-Newton Method using Nesterov’s Accelerated Gradient", Proc. IEEE TENCON 2016, pp.51-54, Nov., 2016. (Singapore)
  17. Ninomiya, H.:"A Novel quasi-Newton-based Training using Nesterov's Accelerated Gradient for Neural Networks", Proc. 2016 International Conference on Artificial Neural Networks (ICANN'16), Part II, LNCS 9887, pp.540, Sep., 2016. (Barcelona, Spain)
  18. Ninomiya, H.:"Distributed Robust Training of Multilayer Neural Networks Using Normalized Risk-Averting Error", Proc. 2014 IEEE Symposium Series on Computational Intelligence (IEEE/SSCI'14, IEEE/CCMB'14), pp.134-140, Dec., 2014. (Orlando, Florida)
  19. Kobayashi, M., Ninomiya, H., Miura, Y., and Watanabe, S.:"DRDLC Generating t-Term Boolean Functions Based on DG-CNTFETs", Proc. 2014 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP'14), pp.81-84, March, 2014. (Honolulu, Hawaii)
  20. Miura, Y., Ninomiya, H., Kobayashi, M. and Watanabe, S.:"An Universal Logic-Circuit with Flip Flop Circuit Based on DG-CNTFET", Proc. 2013 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PacRim13), pp.148-152, Aug., 2013. (Victoria, Canada)
  21. Ninomiya, H.:"Dynamic Sample Size Selection based quasi-Newton Training for Highly Nonlinear Function Approximation using Multilayer Neural Networks", Proc. IEEE&INNS/IJCNN'13, pp.1932-1937, Aug., 2013. (Dallas, TX)
  22. Ninomiya, H.:"Dynamic Sample Size Selection in Improved Online quasi-Newton Method for Robust Training of Feedforward Neural Networks", The Fifth International Conference on Advanced Cognitive Technologies and Applications (COGNITIVE2013), May, 2013. (Valencia, Spain)
  23. Kobayashi, M., Ninomiya, H., Matsushima, T., and Hirasawa, S.:"A Linear Time ADMM Decoding for LDPC Codes over MIMO Channels", Proc. 2013 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP'13), pp.185-188, March, 2013. (The Island of Hawaii, Hawaii)
  24. Kobayashi, M., Ninomiya, H., Matsushima, T., and Hirasawa, S.:"An Error Probability Estimation of the Document Classification Using Markov Model", Proc. 2012 International Symposium on Information Theory and its Applications (ISITA'12), pp.712-716, Oct., 2012. (Honolulu, Hawaii)
  25. Ninomiya, H.:"Parameterized Online quasi-Newton Training for High-Nonlinearity Function Approximation using Multilayer Neural Networks", Proc. IEEE&INNS/IJCNN'11, pp.2770-2777, Aug., 2011. (San Jose, California)
  26. Ninomiya, H.:"An Improved Online quasi-Newton Method for Robust Training and its application to Microwave Neural Network Models", Proc. IEEE&INNS/IJCNN'10(WCCI'10), pp.792-799, July, 2010. (Barcelona, Spain)
  27. Ninomiya, H.:"A Hybrid Global/Local Optimization Technique for Robust Training of Microwave Neural Network Models", Proc. IEEE/CEC'09(2009 IEEE Congress on Evolutionary Computation), pp.2956-2962, May, 2009. (Trondheim, Norway)
  28. Ninomiya, H., Wan, S., Kabir, H., Zhang, X., and Zhang, Q.J.:"Robust training of microwave neural network models using combined global/local optimization techniques", Digest 2008 IEEE International Microwave Symposium(IEEE/IMS'08), pp.995-998, June, 2008. (Atlanta, Georgia)
  29. Ninomiya, H., Zhang, Q.J.:"Particle with Ability of Local search Swarm Optimization:PALSO for training of feedforward neural networks", Proc. IEEE&INNS/IJCNN'08, pp.3008-3013, June, 2008. (Hong Kong)
  30. Ninomiya, H., Numayama, K. and Asai, H.:"Two-staged Tabu Search for Floorplan Problem Using O-Tree Representation", Proc. IEEE/CEC'06(2006 IEEE World Congress on Computational Intelligence), pp.2733-2739, July, 2006. (Vancouver)
  31. Yamagishi, H., Ninomiya, H. and Asai, H.:"Three Dimensional Module Packing by Simulated Annealing", Proc. IEEE/CEC'05(2005 IEEE Congress on Evolutionary Computation), vol.2 of 3, pp.1069-1074, Sep., 2005. (Edinburgh, Scotland)
  32. Ninomiya, H., Tomita, C. and Asai, H.:"An Efficient Learning Algorithm for Finding Multiple Solutions Based on Fixed-Point Homotopy Method", Proc. IEEE&INNS/IJCNN'05, pp.978-983, July, 2005. (Montreal, Quebec)
  33. Numayama, K., Ninomiya, H. and Asai, H.:"Genetic Method for Floorplan by O-Tree in Consideration of Initial Solution", Proc. NCSP'05(RISP 2005 International Workshop on Nonlinear Circuits and Signal Processing), pp.207-210, March, 2005 (Honolulu, Hawaii)
  34. Yamagishi, H., Ninomiya, H. and Asai, H.:"Three Dimensional Module Packing using 3DBSG Structure", Proc. NCSP'05(RISP 2005 International Workshop on Nonlinear Circuits and Signal Processing), pp.195-198, March, 2005 (Honolulu, Hawaii)
  35. Yoshida, M., Ninomiya, H. and Asai, H.:"Regularizer for Local RLS Training Algorithm of Multilayer Feedforward Neural Networks", Proc. NCSP'04(RISP 2004 International Workshop on Nonlinear Circuits and Signal Processing), pp.137-140, March 2004 (Honolulu, Hawaii)
  36. Tomita, C., Yoshida, M., Ninomiya, H. and Asai, H.:"Note on learning algorithm based on Newton homotopy method for feedforward neural networks", Proc. NCSP'04(RISP 2004 International Workshop on Nonlinear Circuits and Signal Processing), pp.128-132, March 2004 (Honolulu, Hawaii)
  37. Ninomiya, H., Tomita, C. and Asai, H.:"An Efficient Learning Algorithm with Second-Order Convergence for Multilayer Neural Networks", Proc. IEEE&INNS/IJCNN'03, pp.2028-2032, July 2003 (Portland, Oregon)
  38. Ninomiya, H. and Sasaki, A.:"A Study on Generalization Ability of 3-Layer Recurrent Neural Networks", Proc. IEEE&INNS/IJCNN'02, pp.1063-1068, May 2002 (Honolulu, Hawaii)
  39. Ninomiya, H. and Sasaki, A.:"3-Layer Recurrent Neural Networks and their Supervised Learning Algorithm", Proc. IEEE&INNS/IJCNN'01, CD-ROM, July 2001 (Washington D.C.)
  40. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design of Neuro-Based Limit Cycle Generator By Hysteresis Neural Networks", Proc. IEEE&INNS/IJCNN'00, CD-ROM, July 2000
  41. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design of Neuro-Based Limit Cycle Generator Using Linear Programming Method", Proc. NOLTA'99, Nov. 1999
  42. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design method of limit cycle generator by cyclic connected neural networks", Proc. ECCTD(European Conference on Circuit Theory and Design)'99, pp.535-538, Aug. 1999
  43. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design method of neural networks for limit cycle generator", Proc. IEEE&INNS/IJCNN'99, CD-ROM, July 1999 (Washington D.C.)
  44. Ninomiya, H. and Kinoshita, N.:"A New Learning Algorithm without Explicit Error Back-Propagation", Proc. IEEE&INNS/IJCNN'99, CD-ROM, July 1999 (Washington D.C.)
  45. Kamo, A., Ninomiya, H., Yoneyama, T. and Asai, H.:"Neural Network Simulator for Spatiotemporal Pattern Analysis", Proc. IEEE/ICECS'98, vol.2, pp.109-112, Sept. 1998
  46. Kamo, A., Ninomiya, H., Yoneyama, T. and Asai, H.:"A Fast Neural Network Simulator for State Transition analysis", Proc. NOLTA'98, pp.1181-1184, Sept. 1998
  47. Yamamoto, H., Ninomiya, H. and Asai, H.:"Application of Neuro-Based Optimization to 3-D Rectangular Puzzles", Proc. IEEE&INNS/IJCNN'98 (WCCI'98), CD-ROM, May 1998 (Anchorage, Alaska)
  48. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design of 3-valued Neural Networks with Cyclic Connection for Limit Cycle Generator", Proc. IEEE&INNS/IJCNN'98 (WCCI'98), CD-ROM, May 1998 (Anchorage, Alaska)
  49. Ninomiya, H., Kamo, A., Yoneyama, T. and Asai, H.:"An Efficient Algorithm for Spatiotemporal Pattern Analysis of Multivalued Neural Networks", Proc. IEEE&INNS/IJCNN'98 (WCCI'98), CD-ROM, May 1998 (Anchorage, Alaska)
  50. Yoneyama, T., Ninomiya, H. and Asai, H.:"Design of 3-Valued Neural Networks with Cyclic Connection for Limit Cycle Generator", Proc. NOLTA'97, vol.1 of 2, pp.277-280, Nov. 1997 (Honolulu, Hawaii)
  51. Ninomiya, H., Kamo, A., Yoneyama, T. and Asai, H.:"A Fast Algorithm for Spatiotemporal Pattern Analysis of Neural Networks with Multivalued Logic", Proc. NOLTA'97, vol.1 of 2, pp.273-276, Nov. 1997 (Honolulu, Hawaii)
  52. Yamamoto, H., Nakayama, T., Ninomiya, H. and Asai, H.:"Application of Neuro-Based Optimization Algorithm to Three Dimensional Cylindric Puzzles", Proc. IEEE/ICNN'97, vol.2 of 4, pp.1246-1250, June 1997 (Houston, Texas)
  53. Yamamoto, H., Ninomiya, H. and Asai, H.:"A Neuro-Based Optimization Algorithm for Rectangular Puzzles", Proc. IEICE/ITC-CSCC'97, vol.2 of 2, pp.1029-1032, June 1997
  54. Kamio, T., Adachi, H., Ninomiya, H. and Asai, H.:"A Design Method of DWT Analog Neuro Chip for VLSI Implementation", Proc. IEEE/IMTC'97, vol.2 of 2, pp.1210-1214, May 1997
  55. Kamio, T., Adachi, H., Ninomiya, H. and Asai, H.:"Discrete Walsh Transform Neuro Chip Using OTA Circuits", Proc. ICONIP'96, vol.2 of 2, pp.1063-1068, Sept. 1996 (Hong Kong)
  56. Ninomiya, H. and Asai, H.:"Recurrent Neural Networks for Digital Sequential Circuits", Proc. ICONIP'96, vol.1 of 2, pp.541-546, Sept. 1996 (Hong Kong)
  57. Ninomiya, H., Egawa, K., Kamio, T. and Asai, H.:"Design and Implementation of Neural Network Logic Circuits with Global Convergence", Proc. IEEE/ICNN'96, vol.2 of 4, pp.980-985, June 1996 (Washington D.C.)
  58. Kamio, T., Ninomiya, H. and Asai, H.:"Design and Implementation of Neuro-Based Discrete Walsh Transform Processor", Proc. IEEE/ICNN'96, vol.2 of 4, pp.926-931, June 1996 (Washington D.C.)
  59. Asai, H., Nakayama, T. and Ninomiya, H.:"Tiling Algorithm with Fitting Violation Function for Analog Neural Array", Proc. IEEE/ICNN'96, vol.1 of 4, pp.565-570, June 1996 (Washington D.C.)
  60. Yamamoto, H., Nakayama, T., Ninomiya, H. and Asai, H.:"Neuro-Based Optimization Algorithm for Three Dimensional Puzzles", Proc. IEICE/ITC-CSCC'96, vol.1 of 2, pp.377-380, June 1996 (South Korea)
  61. Nakayama, T., Ninomiya, H. and Asai, H.:"Neuro-Based Tiling Algorithm Using Fitting Violation Function of Polyominoes", Proc.IEEE/ISCAS'96, vol.4, May 1996 (Atlanta, Georgia)
  62. Onodera, K., Kamio, T., Ninomiya, H. and Asai, H.:"Application of Hopfield Neural Networks with External Noise to TSPs", Proc. NOLTA'95, vol.1 of 2, pp.375-378, Dec. 1995 (Las Vegas, Nevada)
  63. Ninomiya, H., Sato, K., Nakayama, T. and Asai, H.:"Neural Network Approach to Traveling Salesman Problem Based on Hierarchical City Adjacency", Proc. IEEE/ICNN'95, vol.5 of 6, pp.2626-2631, Nov. 1995 (Perth, Australia)
  64. Asai, H., Onodera, K., Kamio, T. and Ninomiya, H.:"A Study of Hopfield Neural Networks with External Noises", Proc. 1995 IEEE International Conference on Neural Networks (IEEE/ICNN'95), vol.4 of 6, pp.1584-1589, Nov. 1995 (Perth, Australia)
  65. Ninomiya, H. and Asai, H.:"Orthogonalized Steepest Descent Method for Solving Nonlinear Equations", Proc. IEEE/ISCAS'95, vol.1 of 3, pp.740-743, April 1995 (Seattle, Washington)
  66. Kamio, T., Ninomiya, H. and Asai, H.:"Convergence of Hopfield Neural Network for Orthogonal Transformation", Proc. 1995 IEEE International Symposium on Circuits and Systems (IEEE/ISCAS'95), vol.1 of 3, pp.493-496, April 1995 (Seattle, Washington)
  67. Asai, H., Kamio, T. and Ninomiya, H.:"Discrete Walsh Transform Processor Based on Hopfield Neural Network", Proc. 1995 IEEE Instrumentation & Measurement Technology Conference (IEEE/IMTC'95), pp.317-322, April 1995
  68. Kamio, T., Ninomiya, H. and Asai, H.:"Discrete Walsh Transform by Linear Programming Neural Net", Proc. ICONIP'94, vol.2 of 3, pp.809-814, Oct. 1994 (Seoul, South Korea)
  69. Nakayama, T., Ninomiya, H. and Asai, H.:"Neuro-Based Optimization for Tiling Problem", Proc. 1994 International Conference on Neural Information Processing (ICONIP'94), vol.2 of 3, pp.787-792, Oct. 1994 (Seoul, South Korea)
  70. Ninomiya, H. and Asai, H.:"Neural Networks for Digital Sequential Circuits", Proc. IEICE/NOLTA(International Symposium on Nonlinear Theory and its Application)'93, vol.2 of 4, pp.507-510, Nov. 1993 (Honolulu, Hawaii)
  71. Ninomiya, H. and Asai, H.:"Design and Simulation of Neural Network Digital Sequential Circuits", Proc. IEICE/JTC-CSCC'93, vol.1 of 2, pp.91-96, July 1993 (Nara, Japan)

Main Academic Society Memberships

IEICE (Institute of Electronics, Information and Communication Engineers), IPSJ (Information Processing Society of Japan), RISP (Research Institute of Signal Processing, Japan), IEEE

Contact

E-mail: ninomiya<at-mark>info.shonan-it.ac.jp

Hobbies

Tropical fish (maintaining a coral reef aquarium)
