Tips for Use
We've compiled a list of tips to make sure you get the most out of using Junior.
Large Language Models (LLMs) are probabilistic, not deterministic
Traditional software is rule-based and deterministic. Generative AI instead produces text by predicting the next word from a probability distribution.
If you run the 'cleaning' process or KTA generation a second time, you'll get a slightly different version of the output. The same goes for asking JuniorGPT the same question twice. This is a feature of these AI models, not a bug.
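To make this concrete, here's a minimal sketch of why two runs differ: at each step a language model samples the next word from a probability distribution rather than following a fixed rule. The vocabulary and probabilities below are illustrative assumptions, not Junior's actual model.

```python
import random

# Hypothetical next-word distribution for some prompt
# (illustrative numbers only, not Junior's real model).
NEXT_WORD_PROBS = {"growth": 0.5, "decline": 0.3, "stability": 0.2}

def sample_next_word(rng: random.Random) -> str:
    """Pick the next word by weighted random choice, like an LLM's sampling step."""
    words = list(NEXT_WORD_PROBS)
    weights = list(NEXT_WORD_PROBS.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Two independent "runs" over the same prompt can choose different continuations.
run_a = [sample_next_word(random.Random(1)) for _ in range(1)]
run_b = [sample_next_word(random.Random(2)) for _ in range(1)]
```

Because each run draws from the distribution independently, repeated runs naturally produce varied (but plausible) outputs.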
LLMs can 'hallucinate'
Hallucination occurs when an LLM generates an incorrect answer with high confidence.
We have implemented various technical methods and UX / UI features to reduce and mitigate this. Among them are the Audit icons that appear throughout the application wherever Junior's output is shown. Take advantage of these to quality-control his work!
Ultimately you are the last line of defense. Double check everything!
LLMs can oversummarise
Junior tends to omit anecdotes and factoids and focus more on general arguments and trends.
Be aware of this when reviewing transcripts, and add relevant pieces back into the Cleaned Transcript.
If you're lost, press CTRL + K to quickly navigate around the application, or go to the Help docs.