
Running Ollama with AutoGPT

Follow these steps to set up and run Ollama and your AutoGPT project:
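Before starting, it can help to confirm the required tools are installed. The short shell check below is a sketch; it assumes the toolchain used in the steps that follow (the `ollama`, `poetry`, and `npm` CLIs).

```shell
# Sanity-check that the CLIs used in the steps below are on PATH.
missing=0
for cmd in ollama poetry npm; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "missing: $cmd"
    missing=$((missing + 1))
  fi
done
echo "$missing tool(s) missing"
```

If anything is reported missing, install it before continuing.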

  1. Run Ollama

     - Open a terminal
     - Execute the following command:

         ollama run llama3

     - Leave this terminal running
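To confirm the model server is actually up before moving on, you can query Ollama's local HTTP API, which listens on port 11434 by default; the `/api/tags` endpoint lists the models installed locally. This is an optional check, not part of the official steps.

```shell
# Ollama's HTTP API listens on localhost:11434 by default.
OLLAMA_URL="http://localhost:11434"

# List locally installed models; llama3 should appear once the pull finishes.
curl -s "$OLLAMA_URL/api/tags" || echo "Ollama is not reachable at $OLLAMA_URL"
```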

  2. Run the Backend

     - Open a new terminal
     - Navigate to the backend directory in the AutoGPT project:

         cd autogpt_platform/backend/

     - Start the backend using Poetry:

         poetry run app

  3. Run the Frontend

     - Open another terminal
     - Navigate to the frontend directory in the AutoGPT project:

         cd autogpt_platform/frontend/

     - Start the frontend development server:

         npm run dev
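Once the dev server reports it is ready, the UI is reachable in the browser. The port below is an assumption (the frontend's usual development default of 3000); check the dev server's startup output if your setup overrides it.

```shell
# Assumption: the frontend dev server uses the common default port 3000.
FRONTEND_URL="http://localhost:3000"
echo "Open $FRONTEND_URL in your browser to use the AutoGPT UI."
```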

  4. Choose the Ollama Model

     - Add an LLMBlock in the UI
     - Choose the last option in the model selection dropdown to select the locally running Ollama model