Abstract
This thesis presents a detailed analysis of Large Language Models (LLMs) applied to the generation of contextualized content. Starting from artificial neurons, the foundations of artificial intelligence, and deep learning, the thesis traces the evolution that led to the transformer architecture and modern LLMs. The study focuses on evaluating state-of-the-art models in the generation of textual content, such as titles, subtitles, summaries, and speeches, given a context such as a document or an article.
Through a series of experiments, this research assesses the models' capability to comprehend and process contextual information in order to generate coherent and relevant content. Tests and metrics were specifically designed to evaluate the models, which are given the same inputs and whose outputs are analysed according to criteria of relevance, coherence, and linguistic quality. The comparative study highlights the strengths and limitations of each model in handling specific aspects of content generation.
The conclusions drawn from this study contribute to a deeper understanding of the practical applications of LLMs in automated content creation. The results of the analysis were used to integrate an LLM into an automated pipeline for contextualized content generation. The study guided the selection of the model implementing this feature, enabling the production of textual content such as that listed above, with configurable parameters for length, tone, and language.