Cambridge researchers show how to use distributed training to make a 1.3bn parameter LLM: more evidence that distributed training works well …| Import AI
We’re on a journey to advance and democratize artificial intelligence through open source and open science.| huggingface.co
There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N=76,977), we deployed 19 LLMs, including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-traini...| arXiv.org
How can large language models (LLMs) serve users with varying preferences that may conflict across cultural, political, or other dimensions? To address this challenge, this paper establishes four key results. First, we demonstrate, through a large-scale multilingual human study with representative samples from five countries (N=15,000), that humans exhibit significantly more variation in preferences than the responses of 21 state-of-the-art LLMs. Second, we show that existing methods for pref...| arXiv.org