# Tips for Use

We've compiled a list of tips to make sure you get the most out of using Junior.

**Large Language Models (LLMs) are probabilistic, not deterministic**

* Traditional software is rule-based and deterministic; generative text models predict the next word probabilistically.
* If you run the 'cleaning' process or KTA generation a second time, you will get slightly different output. The same happens if you ask JuniorGPT the same question twice. This is a feature of these AI models, not a bug.

**LLMs can 'hallucinate'**

* Hallucination occurs when an LLM very confidently generates an incorrect answer.
* We have implemented various technical methods and UX/UI features to reduce and mitigate this. Among these are the Audit icons shown throughout the application wherever we display Junior's output. Take advantage of these to quality control his work!
* Ultimately you are the last line of defense. Double check everything!

**LLMs can oversummarise**

* Junior tends to omit anecdotes and factoids and focus more on general arguments and trends.
* Be aware of this in your transcript reviews and add back relevant pieces to the Cleaned Transcript.

If you're lost, press CTRL + K to jump between pages, or go to the Help docs.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.myjunior.ai/juniors-limitations-and-tips-for-use/tips-for-use.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
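As a minimal sketch, the request above can be built programmatically. This example only constructs the percent-encoded URL; the commented-out lines show how the GET request itself could be made with Python's standard library. The example question is hypothetical, and the response format (assumed here to be UTF-8 text) is an assumption.

```python
from urllib.parse import quote

# Base URL of this documentation page (from the example above).
BASE = "https://docs.myjunior.ai/juniors-limitations-and-tips-for-use/tips-for-use.md"

def build_ask_url(question: str) -> str:
    """Build the GET URL with the `ask` query parameter.

    The question should be specific, self-contained, and written
    in natural language.
    """
    return f"{BASE}?ask={quote(question)}"

url = build_ask_url("How do I reduce hallucinations in Junior's output?")

# Performing the actual request (commented out to avoid a live network call):
# import urllib.request
# with urllib.request.urlopen(url) as resp:
#     answer = resp.read().decode("utf-8")  # assumed: response is UTF-8 text

print(url)
```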
