Building Sentient from Source

This page contains a detailed process for building and running Sentient from the source code on our GitHub repo.




Get started with Sentient today with the code from the Sentient GitHub Repo (https://github.com/existence-master/Sentient):

Prerequisites

The following instructions are written for Linux-based machines, but the process is fundamentally the same on Windows and macOS; only details such as virtual-environment creation and activation differ on Windows.

Clone the project

  git clone https://github.com/existence-master/Sentient.git

Go to the project directory

  cd Sentient

Install the following to start contributing to Sentient:

  • npm: The ElectronJS frontend of the Sentient desktop app uses npm as its package manager.

    Install the latest version of Node.js and npm from the official Node.js website (nodejs.org).

    After that, install all the required packages.

     cd ./src/client && npm install
  • python: Python is needed to run the backend. Install Python from the official Python website (python.org). Internal testing has been carried out with Python 3.11, so we recommend Python 3.11.

    After that, you will need to create a virtual environment and install all required packages. This venv will need to be activated whenever you want to run the Python server (backend).

     cd src/server && python3 -m venv venv
     source venv/bin/activate
     pip install -r requirements.txt

    ⚠️ If you get a numpy dependency error while installing the requirements, first install the requirements with the latest numpy version (2.x). After the installation completes, downgrade to a numpy 1.x release (the backend has been tested and works with numpy 1.26.4) and you will be ready to go. This is probably not the best practice, but it works for now.

    ⚠️ If you intend to use Advanced Voice Mode, you MUST install llama-cpp-python with CUDA support (if you have an NVIDIA GPU) using the commented-out pip command in the requirements.txt file. Otherwise, simply install the llama-cpp-python package with pip for CPU-only support. The line is commented out in the requirements file so that users can install the appropriate version for their setup (CPU-only or GPU-accelerated).

  • Ollama: Download and install the latest version of Ollama from the official Ollama website (ollama.com).

    After that, pull the model you wish to use from Ollama. For example,

     ollama pull llama3.2:3b

    ⚠️ By default, the backend is configured to use Llama 3.2 3B. We found this SLM to be versatile and to work well for our use case compared to other SLMs. However, new SLMs such as Cogito are being released frequently, so we will probably change the default model soon. If you wish to use a different model, find every place where llama3.2:3b is set in the Python backend scripts and change it to the tag of the model you pulled from Ollama.

  • Neo4j Community: Download Neo4j Community Edition from the official Neo4j website (neo4j.com).

    Next, you will need to enable the APOC plugin. After extracting Neo4j Community Edition, navigate to the labs folder. Copy the apoc-x.x.x-core.jar file to the plugins folder inside the Neo4j folder. Edit the neo4j.conf file to allow the use of APOC procedures:

    sudo nano /etc/neo4j/neo4j.conf

    Uncomment or add the following lines:

    dbms.security.procedures.unrestricted=apoc.*
    dbms.security.procedures.allowlist=apoc.*
    dbms.unmanaged_extension_classes=apoc.export=/apoc

    You can run Neo4j Community using the following command:

      cd neo4j/bin && ./neo4j console

    While Neo4j is running, you can visit http://localhost:7474/ to run Cypher Queries and interact with your knowledge graph.

    ⚠️ On your first run of Neo4j Community, you will need to set a username and password. **Remember this password** as you will need to add it to the .env file on the Python backend.

  • Download the Voice Model (Orpheus TTS 3B)

    To use Advanced Voice Mode, you need to manually download the Orpheus model from Hugging Face. The Whisper model is downloaded automatically by Sentient via faster-whisper.

    The model linked above is a Q4 quantization of the Orpheus 3B model. If you have even more VRAM at your disposal, you can go for the Q8 quant.

    Download the GGUF files - these models are run using llama-cpp-python.

    Place the model files in src/server/voice/models and ensure that the correct model name is set in the Python scripts on the backend. By default, the app is configured to use the 8-bit quant, referenced by the same filename it has when downloaded from Hugging Face.

    ⚠️ If you do not have enough VRAM and voice mode is not that important to you, you can comment out/remove the voice mode loading functionality in the main app.py located at src/server/app/app.py
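As a quick smoke test of the Ollama setup described above, you can call the /api/chat endpoint directly. The sketch below is illustrative, not part of the repo; it assumes the default llama3.2:3b tag and Ollama's standard port 11434 (the same URL the backend's BASE_MODEL_URL points at), and uses only the Python standard library.

```python
import json
import urllib.request

BASE_MODEL_URL = "http://localhost:11434/api/chat"  # matches the backend .env example
MODEL_TAG = "llama3.2:3b"  # change this to whatever tag you pulled with `ollama pull`


def build_chat_payload(prompt, model=MODEL_TAG):
    """Build the JSON body that Ollama's /api/chat endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single JSON response instead of a stream
    }


def chat(prompt):
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        BASE_MODEL_URL,
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

If `chat("Say hello in one word.")` returns a sensible reply, the model is pulled and Ollama is serving correctly.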

🔒 Environment Variables (Contributors)

For the Electron Frontend, you will need to create a .env file in the src/interface folder. Populate that .env file with the following variables (examples given).

  ELECTRON_APP_URL="http://localhost:3000"
  APP_SERVER_URL="http://127.0.0.1:5000"
  APP_SERVER_LOADED="false"
  APP_SERVER_INITIATED="false"
  NEO4J_SERVER_URL="http://localhost:7474"
  NEO4J_SERVER_STARTED="false"
  BASE_MODEL_REPO_ID="llama3.2:3b"
  AUTH0_DOMAIN="abcdxyz.us.auth0.com"
  AUTH0_CLIENT_ID="abcd1234"

For the Python Backend, you will need to create a .env file and place it in the src/model folder. Populate that .env file with the following variables (examples given).

  NEO4J_URI=bolt://localhost:7687
  NEO4J_USERNAME=neo4j
  NEO4J_PASSWORD=abcd1234
  EMBEDDING_MODEL_REPO_ID=sentence-transformers/all-MiniLM-L6-v2
  BASE_MODEL_URL=http://localhost:11434/api/chat
  BASE_MODEL_REPO_ID=llama3.2:3b
  LINKEDIN_USERNAME=email@address.com
  LINKEDIN_PASSWORD=password123
  BRAVE_SUBSCRIPTION_TOKEN=YOUR_TOKEN_HERE
  BRAVE_BASE_URL=https://api.search.brave.com/res/v1/web/search
  GOOGLE_CLIENT_ID=YOUR_GOOGLE_CLIENT_ID_HERE
  GOOGLE_PROJECT_ID=YOUR_PROJECT_ID
  GOOGLE_AUTH_URI=https://accounts.google.com/o/oauth2/auth
  GOOGLE_TOKEN_URI=https://oauth2.googleapis.com/token
  GOOGLE_AUTH_PROVIDER_CERT_URL=https://www.googleapis.com/oauth2/v1/certs
  GOOGLE_CLIENT_SECRET=YOUR_SECRET_HERE
  GOOGLE_REDIRECT_URIS=http://localhost
  AES_SECRET_KEY=YOUR_SECRET_KEY_HERE (256 bits or 32 chars)
  AES_IV=YOUR_IV_HERE (256 bits or 32 chars)
  AUTH0_DOMAIN=abcdxyz.us.auth0.com
  AUTH0_MANAGEMENT_CLIENT_ID=YOUR_MANAGEMENT_CLIENT_ID
  AUTH0_MANAGEMENT_CLIENT_SECRET=YOUR_MANAGEMENT_CLIENT_SECRET

⚠️ If you face issues with the Auth0 setup, please contact us via our WhatsApp group or reach out to one of the lead contributors: [@Kabeer2004](https://github.com/Kabeer2004), [@itsskofficial](https://github.com/itsskofficial) or [@abhijeetsuryawanshi12](https://github.com/abhijeetsuryawanshi12).
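The AES values are easy to get wrong. This tiny helper (hypothetical, not part of the repo) just checks the 32-character length requirement noted next to AES_SECRET_KEY and AES_IV above before you start the backend:

```python
def check_aes_var(name, value, expected_len=32):
    """Return True if the value is exactly expected_len characters long,
    per the length noted in the .env example above; print a hint otherwise."""
    ok = len(value) == expected_len
    if not ok:
        print(f"{name}: expected {expected_len} chars, got {len(value)}")
    return ok


# Example usage with a placeholder value (do not commit real keys):
check_aes_var("AES_SECRET_KEY", "0123456789abcdef0123456789abcdef")  # True
```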

🏃 Run Locally (Contributors)

Install dependencies

Ensure that you have installed all the dependencies as outlined in the Prerequisites Section.

Start Neo4j

Start Neo4j Community Edition first.

cd neo4j/bin && ./neo4j console

Start the Python backend server.

cd src/server && source venv/bin/activate
python -m server.app.app

Once the Python server has fully started up, start the Electron client.

cd src/interface && npm run dev
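"Fully started up" can also be checked programmatically rather than by watching the logs. A minimal sketch (assuming the backend listens on 127.0.0.1:5000, as in the .env example above) that polls the port until it accepts TCP connections:

```python
import socket
import time


def is_up(host="127.0.0.1", port=5000, timeout=1.0):
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def wait_for_backend(host="127.0.0.1", port=5000, retries=30, delay=1.0):
    """Poll until the backend port is open, or give up after `retries` attempts."""
    for _ in range(retries):
        if is_up(host, port):
            return True
        time.sleep(delay)
    return False
```

You could call `wait_for_backend()` from a launcher script and only run `npm run dev` once it returns True.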

❗ You are free to package and bundle your own versions of the app that may or may not contain any modifications. However, if you do make any modifications, you must comply with the AGPL license and open-source your version as well.

You will need the environment variables listed above to run the project locally. For sensitive keys like Auth0, GCP, and Brave Search, you can create your own accounts and populate your own keys, or comment in the discussion titled 'Request Environment Variables (.env) Here' if you want pre-setup keys.
