
Fully Offline Version #9201

@cyrus104

Description


I am using this for a homelab, but it could also apply to any work location that handles proprietary data. I would like to be able to grab the AIO version and have it include llama.cpp baked in.

I set up the machine offline with Docker and the NVIDIA container runtime. I downloaded some models and want to put them into the models folder manually.
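
For context, here is roughly what I am doing today. This is only a sketch: the image tag, the in-container `/models` path, and the YAML field names are my assumptions from the docs, not something I have verified against the AIO image.

```bash
# Load the AIO image from a tarball carried over by hand (no registry access offline).
docker load -i localai-aio-gpu.tar

# Drop a model file plus a minimal definition into the host models folder.
# The backend and parameters.model fields are assumptions from the model-config docs.
mkdir -p models
cat > models/my-model.yaml <<'EOF'
name: my-model
backend: llama-cpp
parameters:
  model: my-model-Q4_K_M.gguf
EOF

# Run with GPU access and the models folder mounted in; the tag and the
# /models mount point are assumptions -- adjust for the actual image.
docker run -d --name localai --gpus all \
  -p 8080:8080 \
  -v "$PWD/models:/models" \
  localai/localai:latest-aio-gpu-nvidia-cuda-12
```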

Right now I get a bunch of errors from the backend and model galleries, and the AIO image doesn't ship with a backend included.
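
One workaround I tried to sketch is clearing the gallery list at startup so nothing tries to phone home. I am assuming here that the GALLERIES environment variable accepts a JSON array and that an empty array disables remote lookups:

```bash
# Assumption: GALLERIES takes a JSON array of gallery definitions, so an empty
# array should stop LocalAI from trying to fetch remote gallery indexes.
docker run -d --name localai --gpus all \
  -p 8080:8080 \
  -e GALLERIES='[]' \
  -v "$PWD/models:/models" \
  localai/localai:latest-aio-gpu-nvidia-cuda-12
```

Even if that silences the gallery errors, the AIO image would still need the llama.cpp backend baked in, which is the core of this request.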
