
Commit 645248c

Merge pull request #77 from gschmutz/main
update to remove OLLAMA_MODEL, which has been replaced by LOCAL_LLM in the code
2 parents b827535 + 243b200

File tree: .env.example, README.md

2 files changed: +10 −11 lines changed

.env.example

Lines changed: 2 additions & 5 deletions
````diff
@@ -1,7 +1,3 @@
-
-OLLAMA_BASE_URL=http://localhost:11434 # the endpoint of the Ollama service, defaults to http://localhost:11434 if not set
-OLLAMA_MODEL=llama3.2 # the name of the model to use, defaults to 'llama3.2' if not set
-
 # Which search service to use, either 'duckduckgo', 'tavily', 'perplexity', Searxng
 SEARCH_API='duckduckgo'
 # For Searxng search, defaults to http://localhost:8888
@@ -13,8 +9,9 @@ PERPLEXITY_API_KEY=pplx-xxxxx # Get your key at https://www.perplexity.ai
 
 # LLM Configuration
 LLM_PROVIDER=lmstudio # Options: ollama, lmstudio
-LOCAL_LLM=qwen_qwq-32b # Model name in LMStudio
+LOCAL_LLM=qwen_qwq-32b # Model name in LMStudio/Ollama
 LMSTUDIO_BASE_URL=http://localhost:1234/v1 # LMStudio OpenAI-compatible API URL
+OLLAMA_BASE_URL=http://localhost:11434 # the endpoint of the Ollama service, defaults to http://localhost:11434 if not set
 
 MAX_WEB_RESEARCH_LOOPS=3
 FETCH_FULL_PAGE=True
````
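The net effect of the change is that `LOCAL_LLM` now names the model for both providers, while `OLLAMA_BASE_URL` only points at the Ollama endpoint. Below is a minimal sketch of how such variables might be resolved, assuming the `Configuration` class in `configuration.py` follows the usual env-over-default pattern the README describes; the field names and defaults here are illustrative, not the actual class:

```python
import os
from dataclasses import dataclass, field

# Illustrative only: assumes configuration.py prefers environment
# variables over hard-coded defaults, as the README states.
@dataclass
class Configuration:
    llm_provider: str = field(
        default_factory=lambda: os.environ.get("LLM_PROVIDER", "ollama"))
    local_llm: str = field(
        default_factory=lambda: os.environ.get("LOCAL_LLM", "llama3.2"))
    ollama_base_url: str = field(
        default_factory=lambda: os.environ.get(
            "OLLAMA_BASE_URL", "http://localhost:11434"))
    lmstudio_base_url: str = field(
        default_factory=lambda: os.environ.get(
            "LMSTUDIO_BASE_URL", "http://localhost:1234/v1"))

# With the .env above loaded, this prints "lmstudio qwen_qwq-32b".
config = Configuration()
print(config.llm_provider, config.local_llm)
```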

README.md

Lines changed: 8 additions & 6 deletions
````diff
@@ -39,8 +39,9 @@ ollama pull deepseek-r1:8b
 
 * If set, these values will take precedence over the defaults set in the `Configuration` class in `configuration.py`.
 ```shell
-OLLAMA_BASE_URL="url" # Ollama service endpoint, defaults to `http://localhost:11434`
-OLLAMA_MODEL=model # the model to use, defaults to `llama3.2` if not set
+LLM_PROVIDER=ollama
+OLLAMA_BASE_URL="http://localhost:11434" # Ollama service endpoint, defaults to `http://localhost:11434`
+LOCAL_LLM=model # the model to use, defaults to `llama3.2` if not set
 ```
 
 ### Selecting local model with LMStudio
````
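To see why `LLM_PROVIDER=ollama` joins the two Ollama variables here, consider how a provider switch typically looks in LangChain code. The following is a sketch under assumptions, not the repository's actual wiring: it presumes the `langchain-ollama` and `langchain-openai` packages and treats LMStudio as a generic OpenAI-compatible server.

```python
import os
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI

provider = os.environ.get("LLM_PROVIDER", "ollama")
model = os.environ.get("LOCAL_LLM", "llama3.2")

if provider == "ollama":
    # LOCAL_LLM names the Ollama model; OLLAMA_BASE_URL locates the service.
    llm = ChatOllama(
        model=model,
        base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    )
else:
    # LMStudio exposes an OpenAI-compatible API; it ignores the key's value,
    # so any placeholder string works.
    llm = ChatOpenAI(
        model=model,
        base_url=os.environ.get("LMSTUDIO_BASE_URL", "http://localhost:1234/v1"),
        api_key="lm-studio",
    )

print(llm.invoke("Say hello.").content)
```

Either way, the same `LOCAL_LLM` value selects the model, which is the point of retiring `OLLAMA_MODEL`.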
````diff
@@ -185,21 +186,22 @@ https://github.com/PacoVK/ollama-deep-researcher-ts
 
 ## Running as a Docker container
 
-The included `Dockerfile` only runs LangChain Studio with ollama-deep-researcher as a service, but does not include Ollama as a dependent service. You must run Ollama separately and configure the `OLLAMA_BASE_URL` environment variable. Optionally you can also specify the Ollama model to use by providing the `OLLAMA_MODEL` environment variable.
+The included `Dockerfile` only runs LangChain Studio with local-deep-researcher as a service, but does not include Ollama as a dependent service. You must run Ollama separately and configure the `OLLAMA_BASE_URL` environment variable. Optionally you can also specify the Ollama model to use by providing the `LOCAL_LLM` environment variable.
 
 Clone the repo and build an image:
 ```
-$ docker build -t ollama-deep-researcher .
+$ docker build -t local-deep-researcher .
 ```
 
 Run the container:
 ```
 $ docker run --rm -it -p 2024:2024 \
 -e SEARCH_API="tavily" \
 -e TAVILY_API_KEY="tvly-***YOUR_KEY_HERE***" \
+-e LLM_PROVIDER=ollama \
 -e OLLAMA_BASE_URL="http://host.docker.internal:11434/" \
--e OLLAMA_MODEL="llama3.2" \
-ollama-deep-researcher
+-e LOCAL_LLM="llama3.2" \
+local-deep-researcher
 ```
 
 NOTE: You will see a log message:
````
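Because Ollama runs outside the container, a quick pre-flight check can save a failed startup. The snippet below is a hypothetical helper, not part of the repo; it assumes only Ollama's documented `/api/tags` endpoint, which lists locally pulled models.

```python
# Hypothetical pre-flight check: verify that the Ollama endpoint in
# OLLAMA_BASE_URL is reachable and that the LOCAL_LLM model is already
# pulled, before starting the container.
import json
import os
import urllib.request

base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434").rstrip("/")
model = os.environ.get("LOCAL_LLM", "llama3.2")

with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
    tags = json.load(resp)

# Ollama reports names like "llama3.2:latest"; match with or without the tag.
names = [m["name"] for m in tags.get("models", [])]
if not any(n == model or n.split(":")[0] == model for n in names):
    raise SystemExit(f"Model {model!r} not found; run: ollama pull {model}")
print(f"Ollama at {base_url} is up; {model!r} is available.")
```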
