Inferscope Documentation
This is the documentation for the Inferscope project.
What is Inferscope?
Inferscope is a platform for tracking and visualizing machine learning experiments. We built Inferscope with one idea in mind: looking at the data used in training and the data produced by the model is crucial for understanding and debugging a model. Comparing such data side by side is the best way to understand it, especially now that new models and agents have emergent properties that are hard to evaluate with traditional metrics (though they should still be evaluated with metrics). Inferscope helps both with metrics comparison and with per-sample manual analysis.
How does Inferscope work?
As in any MLOps platform, users can log data related to a run: metrics and artifacts such as images, videos, text, and agent traces. By a run, we mean any ML task (model training, model evaluation on some dataset, etc.) together with all data related to that task. We support storing data in our internal storage, or referencing it via publicly available URLs.
Installation
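No install command is given in this section; assuming the package is published on PyPI under the name inferscope (matching the import used in the Quickstart below), installation would likely be:

```shell
# assumed PyPI package name, inferred from the Python import
pip install inferscope
```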
After installing, go to the token page (https://app.inferscope.tech/token) and get your token.
Quickstart
from inferscope import Client, Run, ModelDescription, DatasetDescription

# you can get your token from https://app.inferscope.tech/token
client = Client(token='your_token')

# a run bundles metrics, model and dataset descriptions, and tags
run = Run(
    client=client,
    name='my_run',
    metrics={"accuracy": 0.95, "cross_entropy": 0.34},
    model=ModelDescription(
        name='resnet18',
        description='Classic ResNet18 model',
    ),
    dataset=DatasetDescription(name='cifar10'),
    tags=["classification", "some_tag"],
)

# attach media artifacts to the run
run.log_image(
    name='my_image',
    image_path='path/to/image.jpg',
)
run.log_video(
    name='my_video',
    video_path='path/to/video.mp4',
)

# persist the run so it appears in the UI
run.commit()
To access your run from the UI, go to the runs page and find it there. To fetch a run in Python, use this sample:
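The snippet below is a hypothetical sketch only: the retrieval API is not shown in this documentation, so the `client.get_run` method and the `run.metrics` attribute are assumptions and should be checked against the actual client API.

```python
from inferscope import Client

client = Client(token='your_token')

# NOTE: `get_run` is an assumed method name; consult the client API
# for the actual call that fetches a run by name or id.
run = client.get_run('my_run')

# `metrics` is assumed to expose the metrics logged at creation time
print(run.metrics)
```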