Bahare Kiumarsi
Assistant Professor, Michigan State University
Verified email at msu.edu
Title
Cited by
Year
Optimal and autonomous control using reinforcement learning: A survey
B Kiumarsi, KG Vamvoudakis, H Modares, FL Lewis
IEEE Transactions on Neural Networks and Learning Systems 29 (6), 2042-2062, 2017
Cited by 709 · 2017
Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
B Kiumarsi, FL Lewis, H Modares, A Karimpour, MB Naghibi-Sistani
Automatica 50 (4), 1167-1175, 2014
Cited by 500 · 2014
Actor–critic-based optimal tracking for partially unknown nonlinear discrete-time systems
B Kiumarsi, FL Lewis
IEEE Transactions on Neural Networks and Learning Systems 26 (1), 140-151, 2014
Cited by 306 · 2014
H∞ control of linear discrete-time systems: Off-policy reinforcement learning
B Kiumarsi, FL Lewis, ZP Jiang
Automatica 78, 144-152, 2017
Cited by 247 · 2017
Optimal tracking control of unknown discrete-time linear systems using input-output measured data
B Kiumarsi, FL Lewis, MB Naghibi-Sistani, A Karimpour
IEEE Transactions on Cybernetics 45 (12), 2770-2779, 2015
Cited by 215 · 2015
Game theory-based control system algorithms with real-time reinforcement learning: How to solve multiplayer games online
KG Vamvoudakis, H Modares, B Kiumarsi, FL Lewis
IEEE Control Systems Magazine 37 (1), 33-52, 2017
Cited by 162 · 2017
Safe reinforcement learning: A control barrier function optimization approach
Z Marvi, B Kiumarsi
International Journal of Robust and Nonlinear Control 31 (6), 1923-1940, 2021
Cited by 134 · 2021
Model-Free λ-Policy Iteration for Discrete-Time Linear Quadratic Regulation
Y Yang, B Kiumarsi, H Modares, C Xu
IEEE Transactions on Neural Networks and Learning Systems 34 (2), 635-649, 2021
Cited by 99 · 2021
Resilient and robust synchronization of multiagent systems under attacks on sensors and actuators
H Modares, B Kiumarsi, FL Lewis, F Ferrese, A Davoudi
IEEE Transactions on Cybernetics 50 (3), 1240-1250, 2019
Cited by 99 · 2019
Optimal output regulation of linear discrete-time systems with unknown dynamics using reinforcement learning
Y Jiang, B Kiumarsi, J Fan, T Chai, J Li, FL Lewis
IEEE Transactions on Cybernetics 50 (7), 3147-3156, 2019
Cited by 96 · 2019
Off-policy reinforcement learning: Optimal operational control for two-time-scale industrial processes
J Li, B Kiumarsi, T Chai, FL Lewis, J Fan
IEEE Transactions on Cybernetics 47 (12), 4547-4558, 2017
Cited by 63 · 2017
Output synchronization of heterogeneous discrete-time systems: A model-free optimal approach
B Kiumarsi, FL Lewis
Automatica 84, 86-94, 2017
Cited by 61 · 2017
Optimal control of nonlinear discrete time-varying systems using a new neural network approximation structure
B Kiumarsi, FL Lewis, DS Levine
Neurocomputing 156, 157-165, 2015
Cited by 54 · 2015
Operational control of mineral grinding processes using adaptive dynamic programming and reference governor
X Lu, B Kiumarsi, T Chai, Y Jiang, FL Lewis
IEEE Transactions on Industrial Informatics 15 (4), 2210-2221, 2018
Cited by 52 · 2018
Autonomy and machine intelligence in complex systems: A tutorial
KG Vamvoudakis, PJ Antsaklis, WE Dixon, JP Hespanha, FL Lewis, ...
2015 American Control Conference (ACC), 5062-5079, 2015
Cited by 52 · 2015
Actor-critic off-policy learning for optimal control of multiple-model discrete-time systems
J Škach, B Kiumarsi, FL Lewis, O Straka
IEEE Transactions on Cybernetics 48 (1), 29-40, 2016
Cited by 48 · 2016
Optimal tracking control for linear discrete-time systems using reinforcement learning
B Kiumarsi-Khomartash, FL Lewis, MB Naghibi-Sistani, A Karimpour
52nd IEEE Conference on Decision and Control, 3845-3850, 2013
Cited by 34 · 2013
Heterogeneous formation control of multiple rotorcrafts with unknown dynamics by reinforcement learning
H Liu, F Peng, H Modares, B Kiumarsi
Information Sciences 558, 194-207, 2021
Cited by 27 · 2021
H∞ control of nonaffine aerial systems using off-policy reinforcement learning
B Kiumarsi, W Kang, FL Lewis
Unmanned Systems 4 (1), 51-60, 2016
Cited by 25 · 2016
Employing adaptive particle swarm optimization algorithm for parameter estimation of an exciter machine
A Darabi, A Alfi, B Kiumarsi, H Modares
Cited by 20 · 2012
Articles 1–20