1. Installation Using pip

Solo Server is available as a Python package. Install it with:

pip install solo-server
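
If you want to confirm the installation from Python, a minimal check like the one below works; it assumes the distribution name is solo-server (as used in the pip command above) and that the solo CLI was placed on your PATH.

# Verify the installation (sketch; "solo-server" is the distribution name
# from the pip command above, and "solo" is the CLI it installs).
import importlib.metadata
import shutil

print("solo-server version:", importlib.metadata.version("solo-server"))
print("solo CLI found at:", shutil.which("solo"))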

GitHub repo: GetSoloTech/solo-server

2. Set Up Your Solo Environment

Initialize your environment with:

solo setup

This command guides you through the initial configuration, ensuring that your local environment is correctly set up for downloading and serving models.
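
If you are scripting your environment bootstrap, the same setup step can be driven from Python; the sketch below simply shells out to the solo setup command shown above, so it still expects an interactive terminal for any prompts.

# Run the interactive setup from a script (sketch; wraps the same CLI command).
import subprocess

# check=True raises CalledProcessError if setup exits with a non-zero status.
subprocess.run(["solo", "setup"], check=True)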

3. Download Your First Model

Fetch a model by its Hugging Face identifier:

solo download deepseek-ai/DeepSeek-R1

This step downloads the model locally to ~/.cache/huggingface/hub using the Hugging Face download CLI.
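
To confirm the download landed in the local Hugging Face cache, you can inspect it with the huggingface_hub library (installed alongside the Hugging Face CLI); this is a small sketch that lists cached repositories and their sizes.

# List what is currently in ~/.cache/huggingface/hub (sketch using huggingface_hub).
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
for repo in cache_info.repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e9:.2f} GB")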

4. Run Solo Server

Start the server for hardware-aware inference:

solo run deepseek-ai/DeepSeek-R1

This command launches a local server at http://localhost:5070, where you can watch your model's inference in real time. As you make changes to your model deployment, the server automatically updates.
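
Once the server reports that it is running, a quick way to check it from Python is to probe port 5070 (the port shown above). The exact HTTP routes exposed depend on your deployment, so this sketch only verifies that something is listening on that port.

# Check that the Solo Server port is accepting connections (sketch; port 5070 from above).
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    result = sock.connect_ex(("localhost", 5070))

print("Server is listening on port 5070" if result == 0 else "No server on port 5070")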