For my experiments with Ollama, a tool for running large language models (LLMs) locally, I measured the quality of a model by what it knows about me. I call this vanity prompting, by analogy with the well-known term vanity search.

I tried out what the LangChain tutorial on retrieval describes. The question “Who is Axel Dürkop?” is answered by Mistral in a completely inadequate way:

Axel Dürkop is a German author, journalist, and public speaker. He was born on October 25, 1978, in Schwerin, Germany. Dürkop is best known for his works related to digitalization, technology, and society. He has written several books on these topics, including “Digitalisierung: Chancen und Risken” (Digitalization: Opportunities and Risks), which was published in 2014. Dürkop frequently appears as a commentator on technology and digitalization-related issues in German media outlets.
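For reference, the unaugmented query is nothing more than a direct call to the model. Here is a minimal sketch, assuming Ollama is running locally with the mistral model already pulled; the exact import path may vary across LangChain versions:

```python
# Minimal sketch: ask Mistral directly, without any retrieval context.
# Assumes a local Ollama server and a prior `ollama pull mistral`.
from langchain_community.llms import Ollama

llm = Ollama(model="mistral")
print(llm.invoke("Who is Axel Dürkop?"))
```

Without any additional context, the model has to rely entirely on its training data, which is where the confabulated biography above comes from.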

If, on the other hand, I load my English About page first, the result is this:

Axel Dürkop is a philosopher, computer science teacher, and research assistant at Hamburg University of Technology. He completed his studies in philosophy and German language and literature in Hamburg and has worked as a director, musician, and actor for ten years in German city and state theaters. For nearly twenty years, he has been teaching computer science from both a technical and philosophical perspective. His current research focus is on openness in research, teaching, and society. He has received several awards and nominations for his teaching and projects. Axel Dürkop also maintains a blog where he writes about various topics of interest to him. You can contact him through his website, email, Mastodon, GitLab, GitHub, Vimeo, Unsplash, ORCID, Zotero, Twitter, Twitch, and XING social media accounts. His blog content is available under the Creative Commons Attribution 4.0 International License.

This, in turn, is factually accurate. A prompt asking what I did before my work at the university is also answered correctly. Unfortunately, I can no longer find my notes on that question.
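The setup follows the tutorial: load the About page, split and index it, and have Mistral answer from the retrieved chunks. Below is a minimal sketch of that pipeline; the URL is a placeholder for my English About page, and the choice of FAISS as the vector store and of Mistral for the embeddings is my assumption, not prescribed by the tutorial:

```python
# Minimal sketch of the retrieval pipeline, loosely following the
# LangChain retrieval tutorial. The URL is a placeholder.
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the page, chunk it, and index the chunks in a local vector store.
docs = WebBaseLoader("https://example.org/en/about/").load()
chunks = RecursiveCharacterTextSplitter().split_documents(docs)
vectorstore = FAISS.from_documents(chunks, OllamaEmbeddings(model="mistral"))

# Let Mistral answer using only the retrieved passages as context.
prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the provided context:\n\n"
    "{context}\n\nQuestion: {input}"
)
chain = create_retrieval_chain(
    vectorstore.as_retriever(),
    create_stuff_documents_chain(Ollama(model="mistral"), prompt),
)
print(chain.invoke({"input": "Who is Axel Dürkop?"})["answer"])
```

With the page in the index, the retriever injects the relevant passages into the prompt, so the model answers from them instead of guessing.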

The effort of the experiment amounted to no more than working through the tutorial and waiting for Ollama to generate its answers (X 230).