
Giannis Iakovidis

Computer Sciences Department, University of Wisconsin–Madison

I'm a 3rd-year PhD student in the Computer Sciences Department at the University of Wisconsin–Madison, advised by Ilias Diakonikolas. I completed my undergraduate studies at the School of Electrical and Computer Engineering of Aristotle University of Thessaloniki (AUTH). I earned my Master's through the inter-institutional (NTUA & NKUA) program ALMA (Algorithms, Logic and Discrete Mathematics), where I was advised by Christos Tzamos. During my undergraduate studies, I taught math competition preparation at AUTH under the supervision of Romanos Diogenes Malikiosis (see the course page).

I'm broadly interested in computational learning theory, robust statistics, and TCS in general. You can find my publications on Google Scholar.

Contact me at iakovidis[at]wisc.edu or iakoviid[at]gmail.com.

Publications

*Authors are listed in alphabetical order unless otherwise specified.

  1. Testable Learning of General Halfspaces under Massart Noise [abstract] [arxiv] Ilias Diakonikolas, Giannis Iakovidis, Daniel M. Kane, and Sihan Liu. arXiv preprint, 2026

    We study testable learning of general halfspaces with Massart noise under Gaussian marginals, giving the first tester-learner framework with near-optimal guarantees and quasi-polynomial complexity matching known SQ lower bounds up to polylogarithmic factors.

  2. Sample Complexity Bounds for Robust Mean Estimation with Mean-Shift Contamination [abstract] [arxiv] Ilias Diakonikolas, Giannis Iakovidis, Daniel M. Kane, and Sihan Liu. arXiv preprint, 2026

    We study mean estimation under mean-shift contamination for general base distributions, giving essentially matching upper and lower sample complexity bounds under mild spectral conditions via Fourier-analytic techniques and the notion of a Fourier witness.

  3. Robust Learning of Multi-index Models via Iterative Subspace Approximation [abstract] [arxiv] Ilias Diakonikolas, Giannis Iakovidis, Daniel M. Kane, and Nikos Zarifis. In Proceedings of the 66th IEEE Symposium on Foundations of Computer Science (FOCS 2025)

    We design a robust learner for $K$-multi-index models under Gaussian marginals with label noise. The algorithm iteratively improves the estimated subspace using conditional low-degree moments, yielding agnostic learners for multiclass linear classifiers and intersections of halfspaces with polynomial complexity in $d$.

  4. Efficient Multivariate Robust Mean Estimation Under Mean-Shift Contamination [abstract] [arxiv] Ilias Diakonikolas, Giannis Iakovidis, Daniel M. Kane, and Thanasis Pittas. International Conference on Machine Learning (ICML 2025)

    We give the first computationally efficient algorithm for high-dimensional robust mean estimation in the mean-shift contamination model, achieving near-optimal sample complexity with accuracy guarantees and running in time polynomial in the sample size and dimension.

  5. Algorithms and SQ Lower Bounds for Robustly Learning Real-valued Multi-index Models [abstract] [arxiv] Ilias Diakonikolas, Giannis Iakovidis, Daniel M. Kane, and Lisheng Ren. Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)
    Selected for Spotlight Presentation

    We study learning real-valued multi-index models under adversarial label noise. We provide a general PAC learning algorithm (square loss) and complementary SQ lower bounds, clarifying when efficient learning is possible versus provably hard.

  6. Algorithms for Multiclass Learning with Corrupted Samples [abstract] [thesis] Giannis Iakovidis. M.Sc. Thesis, ALMA (NTUA & NKUA), 2023

    Master’s thesis on multiclass classification under various label-noise models, including algorithms and analyses for learning with partially corrupted labels.

Talks

Awards

Teaching and Service

Reviewer: JMLR 2024, NeurIPS 2025 Reliable ML Workshop.