D
- display title
- The display title sets a page title that is different from the page name.
- It becomes the preferred page label in many automatically generated page lists. It can be changed with the visual editor (Options > Advanced settings); in the page's wikitext source it appears as the magic word {{DISPLAYTITLE:My display title}}.
L
- LLM
- A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of chatbots such as ChatGPT, Gemini and Claude. (Wikipedia: https://en.wikipedia.org/wiki/Large_language_model)
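As a rough illustration of the "self-supervised" training signal mentioned above, the sketch below turns raw text into (context, next-token) training pairs with no human-written labels. The whitespace tokenizer and the function name next_token_pairs are illustrative assumptions, not any particular model's training code.

```python
# Minimal sketch of the self-supervised next-token objective behind LLM training.
# Tokenization is a naive whitespace split here; real models use subword tokenizers
# and train a neural network to predict the target token, which is omitted.

def next_token_pairs(text: str, context_size: int = 4):
    """Yield (context, target) pairs: the model learns to predict the target
    token from the preceding context, so the raw text itself supplies the labels."""
    tokens = text.split()
    for i in range(1, len(tokens)):
        context = tokens[max(0, i - context_size):i]
        yield context, tokens[i]

if __name__ == "__main__":
    corpus = "a large language model is trained on a vast amount of text"
    for context, target in next_token_pairs(corpus):
        print(context, "->", target)
```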
R
- RAG pipeline
- Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG, the LLM first retrieves relevant passages from a specified set of documents and then uses them when answering the user query. These documents supplement the LLM's pre-existing training data, allowing the model to use domain-specific and/or updated information that is not available in that training data. For example, this lets LLM-based chatbots access internal company data or generate responses based on authoritative sources. (Wikipedia: https://en.wikipedia.org/wiki/Retrieval-augmented_generation)
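The following is a minimal sketch of a RAG pipeline under simplifying assumptions: the function names (embed, retrieve, generate, rag_answer) are hypothetical, retrieval is plain word overlap standing in for vector similarity, and generate is a stub where a real pipeline would call an LLM.

```python
# Minimal RAG pipeline sketch: retrieve the most relevant documents,
# then prepend them to the prompt before calling the language model.
# embed() and generate() are placeholders, not a real library's API.

def embed(text: str) -> set[str]:
    # Placeholder "embedding": the set of lowercased words in the text.
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by word overlap with the query (stand-in for vector similarity).
    scored = sorted(documents, key=lambda d: len(embed(d) & embed(query)), reverse=True)
    return scored[:top_k]

def generate(prompt: str) -> str:
    # Stub for an LLM call; a real pipeline would send the prompt to a model.
    return f"[LLM answer based on prompt of {len(prompt)} characters]"

def rag_answer(query: str, documents: list[str]) -> str:
    # Assemble the retrieved passages into the prompt alongside the question.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

if __name__ == "__main__":
    docs = [
        "The display title sets a page title different from the page name.",
        "RAG supplements an LLM's training data with retrieved documents.",
        "Large language models are trained on vast amounts of text.",
    ]
    print(rag_answer("How does RAG supplement training data?", docs))
```

In practice the retrieval step would typically use a vector index over document embeddings rather than word overlap, but the prompt-assembly pattern is the same.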