Bahare Kiumarsi
Assistant Professor, Michigan State University
Verified email at msu.edu
Title
Cited by
Year
Optimal and autonomous control using reinforcement learning: A survey
B Kiumarsi, KG Vamvoudakis, H Modares, FL Lewis
IEEE Transactions on Neural Networks and Learning Systems 29 (6), 2042-2062, 2017
Cited by 839 · 2017
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
B Kiumarsi, FL Lewis, H Modares, A Karimpour, MB Naghibi-Sistani
Automatica 50 (4), 1167-1175, 2014
Cited by 570 · 2014
Actor–critic-based optimal tracking for partially unknown nonlinear discrete-time systems
B Kiumarsi, FL Lewis
IEEE Transactions on Neural Networks and Learning Systems 26 (1), 140-151, 2014
Cited by 333 · 2014
H∞ control of linear discrete-time systems: Off-policy reinforcement learning
B Kiumarsi, FL Lewis, ZP Jiang
Automatica 78, 144-152, 2017
Cited by 295 · 2017
Optimal tracking control of unknown discrete-time linear systems using input-output measured data
B Kiumarsi, FL Lewis, MB Naghibi-Sistani, A Karimpour
IEEE Transactions on Cybernetics 45 (12), 2770-2779, 2015
Cited by 243 · 2015
Safe reinforcement learning: A control barrier function optimization approach
Z Marvi, B Kiumarsi
International Journal of Robust and Nonlinear Control 31 (6), 1923-1940, 2021
Cited by 194 · 2021
Game theory-based control system algorithms with real-time reinforcement learning: How to solve multiplayer games online
KG Vamvoudakis, H Modares, B Kiumarsi, FL Lewis
IEEE Control Systems Magazine 37 (1), 33-52, 2017
Cited by 180 · 2017
Model-Free λ-Policy Iteration for Discrete-Time Linear Quadratic Regulation
Y Yang, B Kiumarsi, H Modares, C Xu
IEEE Transactions on Neural Networks and Learning Systems 34 (2), 635-649, 2021
Cited by 148 · 2021
Resilient and robust synchronization of multiagent systems under attacks on sensors and actuators
H Modares, B Kiumarsi, FL Lewis, F Ferrese, A Davoudi
IEEE Transactions on Cybernetics 50 (3), 1240-1250, 2019
Cited by 118 · 2019
Optimal output regulation of linear discrete-time systems with unknown dynamics using reinforcement learning
Y Jiang, B Kiumarsi, J Fan, T Chai, J Li, FL Lewis
IEEE Transactions on Cybernetics 50 (7), 3147-3156, 2019
Cited by 116 · 2019
Output synchronization of heterogeneous discrete-time systems: A model-free optimal approach
B Kiumarsi, FL Lewis
Automatica 84, 86-94, 2017
Cited by 68 · 2017
Off-policy reinforcement learning: Optimal operational control for two-time-scale industrial processes
J Li, B Kiumarsi, T Chai, FL Lewis, J Fan
IEEE Transactions on Cybernetics 47 (12), 4547-4558, 2017
Cited by 67 · 2017
Operational control of mineral grinding processes using adaptive dynamic programming and reference governor
X Lu, B Kiumarsi, T Chai, Y Jiang, FL Lewis
IEEE Transactions on Industrial Informatics 15 (4), 2210-2221, 2018
Cited by 59 · 2018
Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure
B Kiumarsi, FL Lewis, DS Levine
Neurocomputing 156, 157-165, 2015
Cited by 57 · 2015
Autonomy and machine intelligence in complex systems: A tutorial
KG Vamvoudakis, PJ Antsaklis, WE Dixon, JP Hespanha, FL Lewis, ...
2015 American Control Conference (ACC), 5062-5079, 2015
Cited by 56 · 2015
Actor-critic off-policy learning for optimal control of multiple-model discrete-time systems
J Škach, B Kiumarsi, FL Lewis, O Straka
IEEE Transactions on Cybernetics 48 (1), 29-40, 2016
Cited by 53 · 2016
Optimal tracking control for linear discrete-time systems using reinforcement learning
B Kiumarsi-Khomartash, FL Lewis, MB Naghibi-Sistani, A Karimpour
52nd IEEE Conference on Decision and Control, 3845-3850, 2013
Cited by 40 · 2013
Heterogeneous formation control of multiple rotorcrafts with unknown dynamics by reinforcement learning
H Liu, F Peng, H Modares, B Kiumarsi
Information Sciences 558, 194-207, 2021
Cited by 36 · 2021
H∞ control of nonaffine aerial systems using off-policy reinforcement learning
B Kiumarsi, W Kang, FL Lewis
Unmanned Systems 4 (01), 51-60, 2016
Cited by 27 · 2016
Safe off-policy reinforcement learning using barrier functions
Z Marvi, B Kiumarsi
2020 American Control Conference (ACC), 2176-2181, 2020
Cited by 22 · 2020
Articles 1–20