Building Sentient from Source
This page contains a detailed process for building and running Sentient from the source code on our GitHub repo.
Get started with Sentient today:
The following instructions are written for Linux-based machines, but the process is fundamentally the same on Windows and macOS; only details such as virtual environment creation and activation differ on Windows.
Clone the project
Go to the project directory
Install the following to start contributing to Sentient:
npm: The ElectronJS frontend of the Sentient desktop app uses npm as its package manager.
Install the latest version of NodeJS and npm from
After that, install all the required packages.
python: Python is needed to run the backend. Internal testing has been carried out with Python 3.11, so we recommend installing Python 3.11.
After that, you will need to create a virtual environment and install all required packages. This venv will need to be activated whenever you want to run the Python server (backend).
⚠️ If you get a numpy dependency error while installing the requirements, first install the requirements with the latest numpy version (2.x). Once that installation completes, install a numpy 1.x version (the backend has been tested and works on numpy 1.26.4) and you will be ready to go. This is probably not best practice, but it works for now.
⚠️ If you intend to use Advanced Voice Mode, you MUST download and install llama-cpp-python with CUDA support (if you have an NVIDIA GPU) using the commented out pip command in the requirements.txt file. Otherwise, simply download and install the llama-cpp-python package with pip for simple CPU-only support. This line is commented out in the requirements file to allow users to download and install the appropriate version based on their preference (CPU only/GPU accelerated).
Ollama: Download and install the latest version of Ollama
After that, pull the model you wish to use from Ollama. For example,
⚠️ By default, the backend is configured with Llama 3.2 3B. We found this SLM to be very versatile, and it works well for our usage compared to other SLMs. However, new SLMs like Cogito are being released every day, so we will probably change the model soon. If you wish to use a different model, simply find every place where llama3.2:3b is set in the Python backend scripts and change it to the tag of the model you pulled from Ollama.
Neo4j Community: Download Neo4j Community Edition
Next, you will need to enable the APOC plugin.
After extracting Neo4j Community Edition, navigate to the labs folder. Copy the apoc-x.x.x-core.jar file to the plugins folder inside the Neo4j folder.
Edit the neo4j.conf file to allow the use of APOC procedures:
Uncomment or add the following lines:
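The lines in question are the standard APOC enablement settings; the exact key names vary slightly by Neo4j version (older 4.x releases use `whitelist` instead of `allowlist`), so verify against your installed version:

```
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.allowlist=apoc.*
```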
You can run Neo4j Community using the following commands:
While Neo4j is running, you can visit http://localhost:7474/ to run Cypher queries and interact with your knowledge graph.
⚠️ On your first run of Neo4j Community, you will need to set a username and password. **Remember this password** as you will need to add it to the .env file on the Python backend.
Download the Voice Model (Orpheus TTS 3B)
To use Advanced Voice Mode, you need to download the TTS model manually from Hugging Face. Whisper (for speech-to-text) is downloaded automatically by Sentient via faster-whisper.
The model linked above is a Q4 quantization of the Orpheus 3B model. If you have even more VRAM at your disposal, you can go for the .
Download the GGUF files - these models are run using llama-cpp-python.
Place the model files in src/server/voice/models and ensure that the correct model name is set in the Python scripts on the backend. By default, the app is configured to use the 8-bit quant, referenced by the same filename it has when downloaded from Hugging Face.
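Placing the downloaded file could look like the following, run from the repo root. The GGUF filename pattern here is purely illustrative; use the actual filename from your Hugging Face download:

```shell
# Ensure the expected folder exists
mkdir -p src/server/voice/models
# Move the downloaded GGUF into it (filename is illustrative)
mv ~/Downloads/orpheus-3b-*.gguf src/server/voice/models/
```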
⚠️ If you do not have enough VRAM and voice mode is not that important to you, you can comment out/remove the voice mode loading functionality in the main app.py located at src/server/app/app.py
For the Electron frontend, you will need to create a .env file in the src/interface folder. Populate that .env file with the following variables (examples given).
For the Python backend, you will need to create a .env file and place it in the src/model folder. Populate that .env file with the following variables (examples given).
⚠️ If you face issues with the Auth0 setup, please contact us via our WhatsApp group or reach out to one of the lead contributors: [@Kabeer2004](https://github.com/Kabeer2004), [@itsskofficial](https://github.com/itsskofficial) or [@abhijeetsuryawanshi12](https://github.com/abhijeetsuryawanshi12)
Install dependencies
Ensure that you have installed all the dependencies as outlined in the Prerequisites Section.
Start Neo4j
Start Neo4j Community Edition first.
Start the Python backend server.
Once the Python server has fully started up, start the Electron client.
❗ You are free to package and bundle your own versions of the app that may or may not contain any modifications. However, if you do make any modifications, you must comply with the AGPL license and open-source your version as well.
You will need the following environment variables to run the project locally. For sensitive keys (Auth0, GCP, Brave Search), you can create your own accounts and populate your own keys, or comment in the discussion titled if you want pre-setup keys.