On-Device AI for Journalism: A Local LLM Test
An exploration of the feasibility of locally run LLMs such as Gemma, Llama, and Mistral for journalistic tasks. The experiment details hardware constraints (RAM, unified memory architecture), model limitations (quantization, context window size), and output quality issues (relevance), along with the 'AI paradox': the workload does not disappear but shifts to prompt engineering and data preparation, even as local models promise lower cost and better privacy.
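As a rough illustration of the kind of setup involved, the sketch below loads a quantized model with llama-cpp-python and runs a summarization prompt locally. The model file, 4-bit quantization level, context size, and input file are assumptions for the example, not details taken from the experiment.

```python
# A minimal sketch of running a quantized local model for a journalistic
# summarization task, assuming llama-cpp-python and a GGUF file on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct-q4_k_m.gguf",  # hypothetical path
    n_ctx=4096,        # context window; larger values need more RAM
    n_gpu_layers=-1,   # offload all layers to GPU/unified memory if available
)

article = open("press_release.txt").read()  # hypothetical input file
prompt = (
    "Summarize the following press release in three neutral sentences "
    "suitable for a news brief:\n\n" + article
)

out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"].strip())
```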