Harris Chan
University of Toronto, Vector Institute
Verified email at cs.toronto.edu - Homepage
Title
Cited by
Year
Large language models are human-level prompt engineers
Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis, H Chan, J Ba
arXiv preprint arXiv:2211.01910, 2022
840 · 2022
Inner monologue: Embodied reasoning through planning with language models
W Huang, F Xia, T Xiao, H Chan, J Liang, P Florence, A Zeng, J Tompson, ...
arXiv preprint arXiv:2207.05608, 2022
816 · 2022
Maximum entropy gain exploration for long horizon multi-goal reinforcement learning
S Pitis*, H Chan*, S Zhao, B Stadie, J Ba
International Conference on Machine Learning, 7750-7761, 2020
138 · 2020
Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models
T Xiao*, H Chan*, P Sermanet, A Wahid, A Brohan, K Hausman, S Levine, ...
arXiv preprint arXiv:2211.11736, 2022
69 · 2022
An empirical study of stochastic gradient descent with structured covariance noise
Y Wen, K Luk, M Gazeau, G Zhang, H Chan, J Ba
International Conference on Artificial Intelligence and Statistics, 3621-3631, 2020
57* · 2020
Steve-1: A generative model for text-to-behavior in minecraft
S Lifshitz, K Paster, H Chan, J Ba, S McIlraith
Advances in Neural Information Processing Systems 36, 2024
49 · 2024
ACTRCE: Augmenting Experience via Teacher's Advice For Multi-Goal Reinforcement Learning
H Chan, Y Wu, J Kiros, S Fidler, J Ba
arXiv preprint arXiv:1902.04546, 2019
44 · 2019
Large language models are human-level prompt engineers (2022)
Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis, H Chan, J Ba
arXiv preprint arXiv:2211.01910, 2022
24 · 2022
An inductive bias for distances: Neural nets that respect the triangle inequality
S Pitis*, H Chan*, K Jamali, J Ba
arXiv preprint arXiv:2002.05825, 2020
24 · 2020
Large language models are human-level prompt engineers. arXiv
Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis, H Chan, J Ba
Preprint posted online on November 3, 2022
20 · 2022
Vision-language models as a source of rewards
K Baumli, S Baveja, F Behbahani, H Chan, G Comanici, S Flennerhag, ...
arXiv preprint arXiv:2312.09187, 2023
18 · 2023
Learning domain invariant representations in goal-conditioned block mdps
B Han, C Zheng, H Chan, K Paster, M Zhang, J Ba
Advances in Neural Information Processing Systems 34, 764-776, 2021
17 · 2021
Auto-regressive Graph Generation Modeling with Improved Evaluation Methods
CC Liu, H Chan, K Luk, AI Borealis
Graph Representation Learning Workshop at Neural Information Processing …, 2019
13 · 2019
Steering large language models using APE
Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis, H Chan, J Ba
NeurIPS ML Safety Workshop, 2022
5 · 2022
Investigating the impact of intrusion detection system performance on communication latency and power system stability
H Chan, E Hammad, D Kundur
Proceedings of the Workshop on Communications, Computation and Control for …, 2016
5 · 2016
Steve-1: A generative model for text-to-behavior in minecraft (abridged version)
S Lifshitz, K Paster, H Chan, J Ba, S McIlraith
NeurIPS 2023 Workshop on Goal-Conditioned Reinforcement Learning, 2023
4 · 2023
Multichannel Generative Language Model: Learning All Possible Factorizations Within and Across Channels
H Chan, J Kiros, W Chan
arXiv preprint arXiv:2010.04438, 2020
4* · 2020
ProtoGE: Prototype Goal Encodings for Multi-goal Reinforcement Learning
S Pitis, H Chan, J Ba
4th Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2019
3 · 2019
LMAct: A Benchmark for In-Context Imitation Learning with Long Multimodal Demonstrations
A Ruoss, F Pardo, H Chan, B Li, V Mnih, T Genewein
arXiv preprint arXiv:2412.01441, 2024
2024
Temporary Goals for Exploration
H Xu, J Ba, S Pitis, H Chan
Deep Reinforcement Learning Workshop NeurIPS 2022