An easy-to-use & supercharged open-source experiment tracker
Aim logs your training runs and any AI metadata, provides a beautiful UI to compare and observe them, and an API to query them programmatically. AimStack offers enterprise support that goes beyond core Aim. Contact us via the hello@aimstack.io e-mail.
About • Demos • Ecosystem • Quick Start • Examples • Documentation • Community • Blog
ℹ️ About
Aim is an open-source, self-hosted ML experiment tracking tool designed to handle 10,000s of training runs.
Aim provides a performant and beautiful UI for exploring and comparing training runs. Additionally, its SDK enables programmatic access to tracked metadata — perfect for automations and Jupyter Notebook analysis.
Aim's mission is to democratize AI dev tools 🎯
🎬 Demos
Check out live Aim demos NOW to see it in action.
🌍 Ecosystem
Aim is not just an experiment tracker. It's the groundwork for an ecosystem. Check out the two most famous Aim-based tools.
🏁 Quick start
Follow the steps below to get started with Aim.
1. Install Aim on your training environment
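Aim is installed as a Python package:

```shell
pip3 install aim
```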
2. Integrate Aim with your code
```python
from aim import Run

# Initialize a new run
run = Run()

# Log run parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}

# Log metrics
for i in range(10):
    run.track(i, name='loss', step=i, context={ "subset": "train" })
    run.track(i, name='acc', step=i, context={ "subset": "train" })
```
See the full list of supported trackable objects (e.g. images, text) here.
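Rich objects go through the same run.track() call as scalars. A minimal sketch, assuming the aim.Image type and a Pillow image (the names and the blank image are purely illustrative):

```python
from PIL import Image as PILImage
from aim import Image, Run

run = Run()

# A blank 64x64 image, used here only as a placeholder
pil_img = PILImage.new("RGB", (64, 64))

# Wrap the image in aim.Image and track it like any other value
run.track(Image(pil_img, caption="sample"), name="images", step=0)
```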
3. Run the training as usual and start Aim UI
```shell
aim up
```
Learn more
Migrate from other tools
Aim has built-in converters to easily migrate logs from other tools. These cover the most common usage scenarios. For custom and complex scenarios, you can use the Aim SDK to implement your own conversion script.
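As a sketch, importing existing TensorBoard logs into an Aim repo looks roughly like this (the subcommand name and flag are assumptions to verify against the converters docs; the path is a placeholder):

```shell
# Assumed converter invocation; check the converters documentation for exact options
aim convert tensorboard --logdir ~/my_tensorboard_logs
```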
Integrate Aim into an existing project
Aim easily integrates with a wide range of ML frameworks, providing built-in callbacks for most of them.
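For instance, with the Keras integration, tracking is a matter of adding a callback to model.fit. A minimal sketch, assuming the aim.keras.AimCallback import path and its experiment argument; the model and data are toy placeholders:

```python
import numpy as np
from tensorflow import keras

from aim.keras import AimCallback  # assumed import path of the built-in Keras callback

# A toy model, purely for illustration
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

# The callback logs the metrics Keras reports during training into an Aim run
model.fit(x, y, epochs=2, callbacks=[AimCallback(experiment="keras_example")])
```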
Query runs programmatically via SDK
Aim Python SDK empowers you to query and access any piece of tracked metadata with ease.
```python
from aim import Repo

my_repo = Repo('/path/to/aim/repo')

query = "metric.name == 'loss'"  # Example query

# Get collection of metrics
for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
    for metric in run_metrics_collection:
        # Get run params
        params = metric.run[...]
        # Get metric values
        steps, metric_values = metric.values.sparse_numpy()
```
Set up a centralized tracking server
The Aim remote tracking server allows running experiments in a multi-host environment and collecting tracked data in a centralized location.
See the docs on how to set up the remote server.
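As a rough sketch, the flow is: start the server on the central host, then point each training script at it via an aim:// repo address. The host, port, and tracked values below are placeholders, and the default port is an assumption:

```shell
# On the central host: create a repo and start the remote tracking server
aim init
aim server
```

```python
from aim import Run

# On each training host: send tracked data to the central server
# (address and port are placeholders)
run = Run(repo='aim://192.168.0.1:53800')
run.track(0.42, name='loss', step=0)
```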
Deploy Aim on Kubernetes
Read the full documentation on aimstack.readthedocs.io 📖
🆚 Comparisons to familiar tools
TensorBoard vs Aim
Training run comparison
Order of magnitude faster training run comparison with Aim
The tracked params are first-class citizens in Aim. You can search, group, and aggregate via params - deeply explore all the tracked data (metrics, params, images) on the UI.
With TensorBoard, users are forced to record those parameters in the training run name to be able to search and compare. This makes comparison super tedious and causes usability issues on the UI when there are many experiments and params. TensorBoard doesn't have features to group and aggregate the metrics.
Scalability
Aim is built to handle 1000s of training runs - both on the backend and on the UI.
TensorBoard becomes really slow and hard to use when a few hundred training runs are queried / compared.
Beloved TB visualizations to be added on Aim
Embedding projector.
Neural network visualization.
MLflow vs Aim
MLflow is an end-to-end ML lifecycle tool. Aim is focused on training tracking. The main differences between Aim and MLflow are around UI scalability and run comparison features.
Aim and MLflow are a perfect match - check out aimlflow, the tool that enables Aim superpowers on MLflow.
Run comparison
Aim treats tracked parameters as first-class citizens. Users can query runs, metrics, and images, and filter using the params.
MLflow does have search by tracked config, but grouping, aggregation, subplotting by hyperparameters, and other comparison features are not available.
UI Scalability
Aim UI can smoothly handle several thousand metrics at the same time with 1000s of steps each. It may get shaky when you explore 1000s of metrics with 10000s of steps each. But we are constantly optimizing!
MLflow UI becomes slow to use when there are a few hundred runs.
Weights and Biases vs Aim
Hosted vs self-hosted
Weights and Biases is a hosted, closed-source MLOps platform.
Aim is a self-hosted, free and open-source experiment tracking tool.
🛣️ Roadmap
Detailed milestones
The Backlog of the Aim product roadmap contains the issues we are going to choose from and prioritize weekly.
High-level roadmap
The high-level features we are going to work on over the next few months:
In progress
Next-up
Aim UI
SDK and Storage
Integrations
Done
👥 Community
Aim README badge
Add Aim badge to your README, if you’ve enjoyed using Aim in your work:
Cite Aim in your papers
In case you’ve found Aim helpful in your research journey, we’d be thrilled if you could acknowledge Aim’s contribution:
```bibtex
@software{Arakelyan_Aim_2020,
  author = {Arakelyan, Gor and Soghomonyan, Gevorg and {The Aim team}},
  doi = {10.5281/zenodo.6536395},
  license = {Apache-2.0},
  month = {6},
  title = {{Aim}},
  url = {https://github.com/aimhubio/aim},
  version = {3.9.3},
  year = {2020}
}
```
Contributing to Aim
Considering contributing to Aim? To get started, please take a moment to read the CONTRIBUTING.md guide.
Join Aim contributors by submitting your first pull request. Happy coding! 😊
Made with contrib.rocks.
More questions?