
NEWS

Thesis submitted

11. September 2017

I have successfully submitted my PhD thesis, which is currently with my opponents. My viva should take place in November; after that, I will be officially finished with my PhD studies. In the meantime, I have started as a part-time teacher at LEAF Academy, teaching AP Calculus. I spend the rest of my working time at Operam, where I switched from a part-time role to a full-time position.

Webpage update

23. June 2017

I have updated my webpage after half a year. There have been only two notable events since then. First, I was a mentor at Basecamp, a Data Science bootcamp organized by Knoyd, where I presented Introduction to Machine Learning and Introduction to Optimization; the slides are available here and here, respectively. Second, I gave a talk on The Role of Optimization in Machine Learning at a local Machine Learning Meetup in Bratislava; the slides are available here and a video recording of the talk is available here. Next up is the deadline for my PhD thesis.

Back in Slovakia

5. January 2017

In mid-December I moved back to Slovakia for good. I will be finishing my PhD studies remotely and handing in my thesis in August 2017. Goodbye, Edinburgh! I also have two upcoming events. First, I will be a mentor at Basecamp, a Data Science Bootcamp in Vienna, lecturing on introductions to machine learning and optimization. Second, I will give a talk in Bratislava during a brand new series called (no surprises here) Machine Learning Meetups, on the role of optimization in machine learning. I am looking forward to both of these events!

IMA Numerical Linear Algebra and Optimization

9. September 2016

I have just returned from Birmingham, where I attended the IMA conference on Numerical Linear Algebra and Optimization. Our group organized two minisymposia, and in one of them I gave a talk on Importance Sampling for Minibatches (paper, slides). All in all, it was a nice conference.

ESSAY

31. August 2016

I have prepared a two-page essay as part of my second-year report at the University of Edinburgh. The essay is a short account of my work and achievements during the first two years of my PhD, and it should be accessible to all mathematicians. It is available here.

UPDATE

5. August 2016

After a long period of inactivity I have decided to update my webpage. It is up to date again! A lot has happened in the meantime. My course finished, with 25 people completing the whole week. I have two new papers: jointly with my supervisor, Coordinate Descent Faceoff: Primal or Dual?, and jointly with my managers from my internship at Amazon, Online optimization and regret guarantees for non-additive long-term constraints. Which brings me to the last point: I have finished my internship at Amazon Berlin. It was an awesome experience and I recommend it to everybody!

MATHEMATICS IN MACHINE LEARNING

14. March 2016

My course on Mathematics in Machine Learning has an updated webpage with a registration option and a syllabus. Feel free to check it out and sign up!

OPTIMIZATION WITHOUT BORDERS

12. February 2016

I spent the last week in Les Houches at a very nice workshop devoted to the 60th birthday of the great Yurii Nesterov. It was a great pleasure to attend a workshop with the top researchers in statistical learning. The weather was not very good, but we managed to get in a full day of skiing. Here is a photo of all the participants. What a week!

NEW PAPER OUT!

9. February 2016

Today we uploaded a new paper to arXiv. It is the first paper based on my very own idea, and it is joint work with my supervisor Peter. The paper is called "Importance Sampling for Minibatches" and it can be found here. It is short and neat; I hope you will enjoy it as much as I do!

MATHEMATICS IN MACHINE LEARNING COURSE

23. January 2016

I am going to organize a course on Mathematics in Machine Learning at my alma mater, the Faculty of Mathematics, Physics and Informatics of Comenius University. The course will be held from 8 to 17 April. All the information can be found here. It's going to be great!

THE THREE KINGS CONFERENCE

4. January 2016

Today I attended a local conference at my alma mater, Comenius University. I gave a talk on AdaSDCA. I also covered for Peter, who had suddenly fallen ill, and gave his talk on Randomized Iterative Methods for Linear Systems.

HOME, SWEET HOME

7. December 2015

I am on my way back to Slovakia today. It is quite early for a Christmas trip, but I have some unspent holiday left for this year. Do not hesitate to contact me if you want to meet up!

WORKSHOP

25. November 2015

I am attending a workshop on Distributed Machine Learning organized by The Alan Turing Institute. Lots of interesting people are presenting; it will be great!

SEMINAR TALK

25. October 2015

Next Tuesday (Oct 27) I am giving a talk at our reading seminar, the All Hands Meeting on Big Data Optimization. Motivated by my recent visit to Julien Mairal's group, I will be presenting this paper.

RESEARCH VISIT

17. October 2015

Beginning tomorrow, I am visiting Julien Mairal at INRIA in Grenoble, France, for a week. During the visit, I will give a talk on AdaSDCA at their seminar.

WE'RE ONLINE

1. August 2015

After a long fight with my laziness, the webpage is ready.

RESUME

  • EDUCATION
  • 2014–2017 · Edinburgh

    PhD in Optimization and Operational Research

    UNIVERSITY OF EDINBURGH

    Postgraduate studies with a focus on large-scale optimization for machine learning applications.
  • 2011–2014 · Bratislava

    Bc. in Mathematics

    COMENIUS UNIVERSITY

    Undergraduate studies of mathematics with a focus on probability and statistics.
  • OTHER EDUCATION
  • July 2015 · Tuebingen

    Machine Learning Summer School

    MAX PLANCK INSTITUTE FOR INTELLIGENT SYSTEMS

    Two-week summer school on Machine Learning with a 20% acceptance rate.
  • June 2015 · Delphi

    Gene Golub Summer School

    SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS

    Two-week summer school on Randomized Numerical Linear Algebra with a 50% acceptance rate.
  • PROFESSIONAL POSITIONS
  • 2017 · Bratislava

    Data Scientist

    OPERAM

    I held a part-time consultant position from January 2017; since September 2017, I have been a full-time data scientist.
  • 2016 · Berlin

    Machine Learning Applied Scientist Intern

    AMAZON

    I spent the summer working on a project about online ad allocation. As a side project, I corrected some mistakes in this paper and joined it as a co-author.
  • 2011–2016 · Bratislava

    Marker

    SLOVAK MATHEMATICAL OLYMPIAD

    I was one of 6 people responsible for marking the national rounds of the Mathematical Olympiad. In total I marked more than 400 solutions.
  • 2013 · Bratislava

    Research Intern

    INNOVATRICS

    I researched new algorithms for fingerprint recognition during a summer internship under the supervision of Jan Lunter, the CTO of Innovatrics.
  • TEACHING
  • 2017 · Bratislava

    AP Calculus Teacher

    LEAF Academy

    I am responsible for one class of 15 students, preparing them for AP Calculus. It is a part-time job with 3.5 hours of teaching each week.
  • 2017 · Vienna

    Mentor

    KNOYD

    I was a mentor for a Data Science Bootcamp organized by Knoyd. I presented 6 hours of lectures and 6 hours of tutorials on the topics of Introduction to Machine Learning and Introduction to Optimization. I was voted the best external mentor of the bootcamp.
  • 2014–2016 · Edinburgh

    Teaching Assistant

    UNIVERSITY OF EDINBURGH

    Fall 2016
    • Numerical Linear Algebra and Applications (Bc. level)
    • Fundamentals of Operations Research (Bc. level)
    • Fundamentals of Optimization (Bc. level)
    Spring 2016
    • Nonlinear Optimization (Msc. level)
    • Optimization Methods in Finance (Msc. level)
    • Modern Optimization Methods for Big Data Problems (Msc. level; also lectured twice)
    Fall 2015
    • Introduction to Research in Data Science (PhD. level)
    • Numerical Linear Algebra and Applications (Bc. level)
    Spring 2015
    • Mathematics for Science and Engineering (Bc. level)
    • Optimization Methods in Finance (Msc. level)
    Fall 2014
    • Introduction to Research in Data Science (PhD. level)
  • 2016 · Bratislava

    Lecturer

    FMFI UK / Trojsten

    I prepared and lectured a week-long course on Mathematics in Machine Learning, consisting of 20 hours of lectures and 10 hours of tutorials. We had 25 full-time participants.
  • 2011–now · Bratislava

    Organizer

    TROJSTEN

    Trojsten is a Slovak NGO working with high-school students talented in mathematics, physics and computer science. We annually organize competitions, which involves selecting problems and marking the solutions, as well as week-long camps for the most successful participants. In total I have helped select problems and mark solutions for 20+ series, organized 5+ camps and given 20+ talks to high-school students.
  • VOLUNTEER
  • 2014–2015 · Edinburgh

    Treasurer at the Edinburgh Chapter

    SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS

    I was responsible for the finances, and as a chapter we jointly organised a few events.
  • SCHOLARSHIPS & GRANTS
  • July 2015 · Lille

    TRAVEL FUND

    INTERNATIONAL CONFERENCE ON MACHINE LEARNING

    Covering costs for ICML 2015 in Lille, France.
  • April 2015 · Edinburgh

    RESEARCH AND DEVELOPMENT FUND

    UNIVERSITY OF EDINBURGH

    Covering costs for Machine Learning Summer School 2015 in Tuebingen, Germany.
  • Nov 2014 · Edinburgh

    RESEARCH AND DEVELOPMENT FUND

    UNIVERSITY OF EDINBURGH

    Covering travel costs for SIAM CSE 2015 in Salt Lake City.
  • 2014–2018 · Edinburgh

    PRINCIPAL'S CAREER DEVELOPMENT SCHOLARSHIP

    UNIVERSITY OF EDINBURGH

    Highly competitive scholarship awarded to only a few students across the university.
  • 2011–2014 · Bratislava

    AWARD FOR ACADEMIC EXCELLENCE

    COMENIUS UNIVERSITY

    Awarded 6 times out of a possible 6.
  • ACHIEVEMENTS
  • May 2015 · Edinburgh

    BEST CONTRIBUTION AWARD (2nd PLACE)

    OPTIMIZATION AND BIG DATA 2015

    Awarded for the work Stochastic Dual Coordinate Ascent with Adaptive Probabilities. Award committee: Prof. Arkadi Nemirovski (Georgia Institute of Technology) and Dr Rodolphe Jenatton (Amazon Berlin).
  • May 2014 · Ústí nad Labem

    BEST STUDENT WORK

    INTERNATIONAL STUDENT SCIENCE CONFERENCE

    Awarded in the category "Applied computer science" for work on fingerprint recognition based on my bachelor thesis.
  • July 2012 · Blagoevgrad

    3rd PRIZE

    INTERNATIONAL MATHEMATICAL COMPETITION

    Individual mathematical competition for students in their first 4 years of university study.
  • Since 2012

    INTERNATIONAL MASTER

    CHESS

    The second-highest chess title, which is awarded for life.
  • Sept 2010 · Strečno

    HONOURABLE MENTION

    MIDDLE-EUROPEAN MATHEMATICAL OLYMPIAD

    Individual and team mathematical competition for high-school students, with participants from 10 European countries.

SKILLS & INTERESTS

COMPUTER SKILLS
SCIENTIFIC COMPUTING
Julia · MATLAB · Python · C++ · R
DOCUMENTS
LaTeX · PowerPoint
LANGUAGES
SLOVAK
Mother tongue · Professional Efficiency
CZECH
Mother tongue · Professional Efficiency
ENGLISH
Professional Efficiency
HUNGARIAN
Mother tongue · Fluent
GERMAN
Basic knowledge
ACADEMIC INTERESTS
OPTIMIZATION
Stochastic · Convex · Online
MACHINE LEARNING
Reinforcement Learning · Statistical Learning
TEACHING
Mathematics · Machine Learning
OTHER INTERESTS
(MATHEMATICAL) BOARD GAMES
Chess · Through the Ages
SPORTS
Ultimate Frisbee · Volleyball · Running
READING
Steven Erikson · Neil Gaiman · Terry Pratchett

PAPERS

9 SEP 2017

Global Convergence of Arbitrary-Block Gradient Methods for Generalized Polyak-Łojasiewicz Functions

Edinburgh, UK

Dominik Csiba, Peter Richtárik · arXiv

This paper is submitted.

In this paper we introduce two novel generalizations of the theory for gradient descent type methods in the proximal setting. First, we introduce the proportion function, which we further use to analyze all known (and many new) block-selection rules for block coordinate descent methods under a single framework. This framework includes randomized methods with uniform, non-uniform or even adaptive sampling strategies, as well as deterministic methods with batch, greedy or cyclic selection rules. Second, the theory of strongly-convex optimization was recently generalized to a specific class of non-convex functions satisfying the so-called Polyak-Łojasiewicz condition. To mirror this generalization in the weakly convex case, we introduce the Weak Polyak-Łojasiewicz condition, using which we give global convergence guarantees for a class of non-convex functions previously not considered in theory. Additionally, we establish (necessarily somewhat weaker) convergence guarantees for an even larger class of non-convex functions satisfying only a certain smoothness assumption. By combining the two above-mentioned generalizations we recover the state-of-the-art convergence guarantees for a large class of previously known methods and setups as special cases of our general framework. Moreover, our framework allows for the derivation of new guarantees for many new combinations of methods and setups, as well as a large class of novel non-convex objectives. The flexibility of our approach offers a lot of potential for future research, as a new block selection procedure will have a convergence guarantee for all objectives considered in our framework, while a new objective analyzed under our approach will have a whole fleet of block selection rules with convergence guarantees readily available.
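
For context, the (strong) Polyak-Łojasiewicz condition that the paper generalizes is usually stated as follows for a smooth function f with minimal value f*; this is the standard form from the literature, and the paper's weak variant is defined analogously (see the paper for the exact statement):

    \frac{1}{2}\,\|\nabla f(x)\|^{2} \;\ge\; \mu\,\bigl(f(x) - f^{*}\bigr)
    \qquad \text{for all } x, \text{ for some fixed } \mu > 0.

Every strongly convex function satisfies this inequality, but so do some non-convex ones, which is what makes global convergence guarantees possible beyond convexity.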

8 JUNE 2016

Online optimization and regret guarantees for non-additive long-term constraints

Amazon Berlin, Germany

Rodolphe Jenatton, Jim Huang, Dominik Csiba, Cedric Archambeau · arXiv

This paper is submitted.

We consider online optimization in the 1-lookahead setting, where the objective does not decompose additively over the rounds of the online game. The resulting formulation enables us to deal with non-stationary and/or long-term constraints, which arise, for example, in online display advertising problems. We propose an online primal-dual algorithm for which we obtain dynamic cumulative regret guarantees. They depend on the convexity and the smoothness of the non-additive penalty, as well as on terms capturing the smoothness with which the residuals of the non-stationary and long-term constraints vary over the rounds. We conduct experiments on synthetic data to illustrate the benefits of the non-additive penalty and show vanishing regret convergence on live traffic data collected by a display advertising platform in production.
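
To give a feel for the primal-dual mechanics the abstract mentions, here is a generic online primal-dual step for a single long-term constraint g(w) ≤ 0. This is only a sketch of the general pattern under my own simplifications; the paper's algorithm additionally handles non-additive penalties and the 1-lookahead setting.

    # Generic online primal-dual step for a long-term constraint g(w) <= 0.
    # Illustrative sketch only; NOT the algorithm from the paper.
    def primal_dual_step(w, dual, grad_f, g_val, grad_g, eta):
        w_next = w - eta * (grad_f + dual * grad_g)  # primal descent on the Lagrangian
        dual_next = max(0.0, dual + eta * g_val)     # dual ascent, projected onto [0, inf)
        return w_next, dual_next

The dual variable accumulates constraint violations over the rounds, so the primal iterates are gradually pushed back toward long-run feasibility.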

29 MAY 2016

Coordinate Descent Faceoff: Primal or Dual?

Edinburgh, UK

Dominik Csiba, Peter Richtárik · arXiv

This paper is submitted.

Randomized coordinate descent (RCD) methods are state-of-the-art algorithms for training linear predictors via minimizing regularized empirical risk. When the number of examples (n) is much larger than the number of features (d), a common strategy is to apply RCD to the dual problem. On the other hand, when the number of features is much larger than the number of examples, it makes sense to apply RCD directly to the primal problem. In this paper we provide the first joint study of these two approaches when applied to L2-regularized ERM. First, we show through a rigorous analysis that for dense data, the above intuition is precisely correct. However, we find that for sparse and structured data, primal RCD can significantly outperform dual RCD even if d≪n, and vice versa, dual RCD can be much faster than primal RCD even if n≪d. Moreover, we show that, surprisingly, a single sampling strategy minimizes both the (bound on the) number of iterations and the overall expected complexity of RCD. Note that the latter complexity measure also takes into account the average cost of the iterations, which depends on the structure and sparsity of the data, and on the sampling strategy employed. We confirm our theoretical predictions using extensive experiments with both synthetic and real data sets.
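
As a concrete instance of the setup, here is a minimal sketch of primal RCD applied to L2-regularized least squares (my own illustrative code, not taken from the paper). Note that the per-iteration cost is dominated by the sparsity of the sampled column, which is exactly the kind of data-dependent cost the paper's complexity measure accounts for.

    # Minimal primal randomized coordinate descent for
    # min_w (1/2n)||Aw - y||^2 + (lam/2)||w||^2. Illustrative sketch.
    import numpy as np

    def primal_rcd(A, y, lam, iters=10_000, seed=0):
        n, d = A.shape
        rng = np.random.default_rng(seed)
        w = np.zeros(d)
        residual = A @ w - y                       # residual = Aw - y
        L = (A ** 2).sum(axis=0) / n + lam         # coordinate-wise Lipschitz constants
        for _ in range(iters):
            j = rng.integers(d)                    # uniform coordinate sampling
            g = A[:, j] @ residual / n + lam * w[j]
            step = g / L[j]
            w[j] -= step
            residual -= step * A[:, j]             # cheap rank-one residual update
        return w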

9 FEB 2016

Importance Sampling for Minibatches

Edinburgh, UK

Dominik Csiba, Peter Richtárik · arXiv

This paper is submitted.

Minibatching is a very well studied and highly popular technique in supervised learning, used by practitioners due to its ability to accelerate training through better utilization of parallel processing power and reduction of stochastic variance. Another popular technique is importance sampling – a strategy for preferential sampling of more important examples also capable of accelerating the training process. However, despite considerable effort by the community in these areas, and due to the inherent technical difficulty of the problem, there is no existing work combining the power of importance sampling with the strength of minibatching. In this paper we propose the first importance sampling for minibatches and give simple and rigorous complexity analysis of its performance. We illustrate on synthetic problems that for training data of certain properties, our sampling can lead to several orders of magnitude improvement in training time. We then test the new sampling on several popular datasets, and show that the improvement can reach an order of magnitude.
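
To illustrate the serial baseline that the paper extends: in classical importance sampling, example i is drawn with probability proportional to its smoothness constant L_i, and the sampled gradient is reweighted to keep the estimate unbiased. Below is a rough sketch under my own assumptions (L_i taken proportional to the squared row norm, as for quadratic losses); the minibatch sampling constructed in the paper is considerably more involved.

    # Serial importance sampling sketch: p_i proportional to L_i.
    import numpy as np

    def importance_probabilities(A):
        L = (A ** 2).sum(axis=1)          # per-example smoothness proxies
        return L / L.sum()

    def sample_and_weight(p, rng):
        n = len(p)
        i = rng.choice(n, p=p)            # biased draw towards "hard" examples
        weight = 1.0 / (n * p[i])         # reweighting keeps the gradient unbiased
        return i, weight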

6 JUL 2015

Stochastic Dual Coordinate Ascent with Adaptive Probabilities (AdaSDCA)

Edinburgh, UK

Dominik Csiba, Zheng Qu, Peter Richtárik · ICML 2015

This paper was published in the Proceedings of ICML 2015. It received the Best Contribution Award (2nd place) at Optimization and Big Data 2015, Edinburgh.

This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification consists in allowing the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, it is of a theoretical character, as it is expensive to implement. We also propose AdaSDCA+: a practical variant which in our experiments outperforms existing non-adaptive methods.
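
The adaptive idea can be sketched as follows: at every iteration, sample the dual coordinate i with probability driven by the magnitude of its dual residue κ_i = α_i + φ_i'(a_iᵀw), which is zero exactly when coordinate i is optimal. This is a rough illustration under my own simplifications; the exact probabilities in the paper also take the data and smoothness constants into account.

    # Sketch of adaptive sampling probabilities in the spirit of AdaSDCA.
    import numpy as np

    def adaptive_probabilities(alpha, loss_derivatives):
        kappa = np.abs(alpha + loss_derivatives)  # |dual residues|
        total = kappa.sum()
        if total == 0.0:                          # all coordinates already optimal
            return np.full(alpha.shape, 1.0 / len(alpha))
        return kappa / total                      # prefer far-from-optimal coordinates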

7 JUN 2015

Primal Method for ERM with Flexible Mini-batching Schemes and Non-convex Losses (dfSDCA)

Edinburgh, UK

Dominik Csiba, Peter Richtárik · arXiv

This paper is submitted.

In this work we develop a new algorithm for regularized empirical risk minimization. Our method extends recent techniques of Shalev-Shwartz [02/2015], which enable a dual-free analysis of SDCA, to arbitrary mini-batching schemes. Moreover, our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of QUARTZ, which is a primal-dual method also allowing for arbitrary mini-batching schemes. The advantage of a dual-free analysis comes from the fact that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. We illustrate through experiments the utility of being able to design arbitrary mini-batching schemes.
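
For background, the dual-free SDCA step of Shalev-Shwartz that this work extends to arbitrary mini-batching schemes looks roughly as follows for ridge regression. This is a simplified serial sketch under my own assumptions (squared loss, a constant step size eta that must be chosen sufficiently small); the paper generalizes the sampling of the single coordinate i to arbitrary random subsets.

    # Dual-free SDCA sketch for
    # min_w (1/n) sum_i 0.5*(a_i^T w - y_i)^2 + (lam/2)||w||^2.
    import numpy as np

    def dual_free_sdca(A, y, lam, eta, epochs=10, seed=0):
        n, d = A.shape
        rng = np.random.default_rng(seed)
        alpha = np.zeros((n, d))   # pseudo-dual vectors (for linear models a scalar
                                   # per example would suffice in practice)
        w = np.zeros(d)            # invariant: w = (1/(lam*n)) * sum_i alpha_i
        for _ in range(epochs * n):
            i = rng.integers(n)
            grad_i = (A[i] @ w - y[i]) * A[i]       # gradient of the i-th loss at w
            delta = eta * lam * n * (grad_i + alpha[i])
            alpha[i] -= delta                        # pseudo-dual update
            w -= delta / (lam * n)                   # maintains the invariant
        return w

Because the analysis never uses a dual objective, the same update remains meaningful for non-convex individual losses, which is the point the abstract makes.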


PRESENTATIONS


RESEARCH GROUP

We focus mainly on Big Data Optimization and its applications in Machine Learning.

PETER RICHTÁRIK

Assistant Professor

Peter has been my supervisor since 2014 and an Assistant Professor of Optimization at the University of Edinburgh since 2009.

JAKUB KONEČNÝ

PhD Student

Jakub has been a PhD student under Peter's supervision since August 2013.

ROBERT MANSEL GOWER

PhD Student

Robert has been a PhD student under Peter's supervision since May 2015.

NICOLAS LOIZOU

PhD Student

Nicolas has been a PhD student under Peter's supervision since September 2015.


CONTACT

Get in touch

by sending me a message to
dominik at operam.com