Efficiently compare your model iterations with rich visualizations to identify your champion model

At Comet.ml, we believe that machine learning should be highly iterative, collaborative, and reproducible.

Comet.ml allows data science teams to automatically track their datasets, code changes, experimentation history, and models, creating efficiency, transparency, and reproducibility. One of our most popular features has been our live experiment tracking charts. With Project Visualizations, we’ve extended these rich visualizations across experiments to help users compare many experiment iterations at once.


Introducing Project Visualizations

Project Visualizations were born out of a user need for both focus and context during model iteration.

Focus — How do I quickly identify my best-performing (champion) model among 1,000 runs? Which specific hyperparameter set and model configuration gave me the highest accuracy?

Context — In my 1,000 runs, what kind of parameter space was I searching? Should I expand it to cover more and different learning rates, epochs, batch sizes, and so on? When I share these results with my manager, how do I provide enough background on the approaches I tried to arrive at this best result?

To answer these questions and to build truly robust machine learning models, data scientists and machine learning engineers need to:

  • have a consistent and thorough record of past work
  • have time and resources for faster iteration cycles
  • easily share results and collaborate to generate insights

Project Visualizations makes all of these goals possible, and much more. See below for the kinds of visualizations available and an example of how to generate them 👇🏼
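As a taste of what this looks like in practice, here is a minimal sketch of logging a tracked run with the Comet.ml Python SDK. The API key, project name, parameter values, and simulated loss below are placeholders for illustration, not taken from a real project:

```python
# Minimal sketch: one tracked run with the comet_ml Python SDK.
# The api_key, project_name, parameters, and simulated loss are placeholders.
from comet_ml import Experiment

experiment = Experiment(
    api_key="YOUR_API_KEY",
    project_name="project-visualizations-demo",  # hypothetical project name
)

# Hyperparameters are logged once per run; these feed project-level charts
# such as the parallel coordinates chart described below.
experiment.log_parameters({"learning_rate": 0.001, "batch_size": 64, "epochs": 10})

# Per-step metrics feed the live line charts.
for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    experiment.log_metric("train_loss", loss, step=step)

experiment.end()
```

Every run logged this way shows up in the project view, where the charts below aggregate across all of them.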


A Wide Range of Visualization Options

Project-level visualizations allow you to compare, explore, and analyze across all of your machine learning experiments. The visualization options we provide include:

1. Line Charts — compare and detect differences in your models’ training process.

Each line denotes the training loss from a single experiment. In Comet.ml, you can watch this loss metric update live as your model trains (as in the per-step log_metric calls in the sketch above), which is especially useful for long-running jobs.

2. Bar Charts — easily identify top-performing models.

Track major or minor differences in your experiments’ key metrics with the bar chart.
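For instance, logging a single summary metric at the end of each run gives the bar chart one bar per experiment. This is a hypothetical snippet with placeholder names and values:

```python
# Hypothetical: log one summary metric at the end of a run; the project's
# bar chart then shows one bar per experiment for "test_accuracy".
from comet_ml import Experiment

experiment = Experiment(api_key="YOUR_API_KEY", project_name="project-visualizations-demo")
experiment.log_metric("test_accuracy", 0.973)  # placeholder final score
experiment.end()
```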

3. Parallel Coordinates Chart — visualize an n-dimensional parameter space. Incredibly useful for exploring your hyperparameter space and identifying the most effective parameter combination.

Sweep over certain ranges of your target variable (in this example, validation accuracy) to see the specific hyperparameter values that produced the result you’re looking for. For example, in the figure above, we see that the best-performing models used a CNN kernel size of 5.
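To populate a chart like this, each run in a sweep logs its hyperparameters and a target metric. Here is an illustrative sketch of such a sweep; the grid values, project name, and train_and_evaluate stub are assumptions for the example, not from a real workflow:

```python
# Illustrative sweep: each run logs its hyperparameters and a target metric,
# and becomes one line in the project's parallel coordinates chart.
import itertools
from comet_ml import Experiment

def train_and_evaluate(learning_rate, kernel_size):
    # Stand-in for real training; returns a dummy validation accuracy.
    return 0.90 + 0.01 * kernel_size - learning_rate

for lr, kernel in itertools.product([1e-2, 1e-3], [3, 5]):
    experiment = Experiment(api_key="YOUR_API_KEY", project_name="project-visualizations-demo")
    experiment.log_parameters({"learning_rate": lr, "kernel_size": kernel})
    experiment.log_metric("val_accuracy", train_and_evaluate(lr, kernel))
    experiment.end()
```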


Posted by: Cecelia Shao

Product Lead @ Comet.ml. Comet is doing for Machine Learning what GitHub did for software. We allow data science teams to automatically track their datasets, code changes, experimentation history, and production models, creating efficiency, transparency, and reproducibility. Learn more at www.comet.ml
