Reza Babanezhad
Samsung AI lab
No verified email address - Homepage
Title
Cited by
Year
Stop wasting my gradients: Practical SVRG
R Babanezhad Harikandeh, MO Ahmed, A Virani, M Schmidt, J Konečný, ...
Advances in Neural Information Processing Systems 28, 2015
162 · 2015
Non-uniform stochastic average gradient method for training conditional random fields
M Schmidt, R Babanezhad, M Ahmed, A Defazio, A Clifton, A Sarkar
Artificial Intelligence and Statistics, 819-828, 2015
101 · 2015
M-ADDA: Unsupervised domain adaptation with deep metric learning
IH Laradji, R Babanezhad
Domain adaptation for visual understanding, 17-31, 2020
57 · 2020
Faster stochastic variational inference using proximal-gradient methods with general divergence functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
46 · 2015
A generic top-n recommendation framework for trading-off accuracy, novelty, and coverage
Z Zolaktaf, R Babanezhad, R Pottinger
2018 IEEE 34th International Conference on Data Engineering (ICDE), 149-160, 2018
38 · 2018
Reducing the variance in online optimization by transporting past gradients
S Arnold, PA Manzagol, R Babanezhad Harikandeh, I Mitliagkas, ...
Advances in Neural Information Processing Systems 32, 2019
23 · 2019
SVRG meets AdaGrad: Painless variance reduction
B Dubois-Taine, S Vaswani, R Babanezhad, M Schmidt, S Lacoste-Julien
Machine Learning 111 (12), 4359-4409, 2022
21 · 2022
An analysis of the adaptation speed of causal models
R Le Priol, R Babanezhad, Y Bengio, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 775-783, 2021
17 · 2021
Towards noise-adaptive, problem-adaptive (accelerated) stochastic gradient descent
S Vaswani, B Dubois-Taine, R Babanezhad
International Conference on Machine Learning, 22015-22059, 2022
11 · 2022
Towards painless policy optimization for constrained MDPs
A Jain, S Vaswani, R Babanezhad, C Szepesvari, D Precup
Uncertainty in Artificial Intelligence, 895-905, 2022
8 · 2022
Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
J. Comp. Neurol 319, 359-386, 2015
8 · 2015
Process patterns for web engineering
R Babanezhad, YM Bibalan, R Ramsin
2010 IEEE 34th Annual Computer Software and Applications Conference, 477-486, 2010
8 · 2010
MASAGA: A linearly-convergent stochastic first-order method for optimization on manifolds
R Babanezhad, IH Laradji, A Shafaei, M Schmidt
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2019
7 · 2019
Infinite-dimensional optimization for zero-sum games via variational transport
L Liu, Y Zhang, Z Yang, R Babanezhad, Z Wang
International Conference on Machine Learning, 7033-7044, 2021
6 · 2021
Target-based surrogates for stochastic optimization
JW Lavington, S Vaswani, R Babanezhad, M Schmidt, NL Roux
arXiv preprint arXiv:2302.02607, 2023
5 · 2023
Geometry-aware universal mirror-prox
R Babanezhad, S Lacoste-Julien
arXiv preprint arXiv:2011.11203, 2020
5 · 2020
To each optimizer a norm, to each norm its generalization
S Vaswani, R Babanezhad, J Gallego-Posada, A Mishkin, ...
arXiv preprint arXiv:2006.06821, 2020
5 · 2020
Semantics Preserving Adversarial Learning
OA Dia, E Barshan, R Babanezhad
arXiv preprint arXiv:1903.03905, 2019
4 · 2019
Towards noise-adaptive, problem-adaptive stochastic gradient descent
S Vaswani, B Dubois-Taine, R Babanezhad
3 · 2021
Infinite-dimensional game optimization via variational transport
L Liu, Y Zhang, Z Yang, R Babanezhad, Z Wang
OPT 2020, 2020
3 · 2020
Articles 1–20