Subject category: Automation technology, information technology

Trajectory evaluation method based on intention analysis

JIN Zhuo-jun, QIAN Hui, ZHU Miao-liang
College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China