My latest paper is available on arXiv: Low Rank Factorizations are Indirect Encodings for Deep Neuroevolution. The general idea is that we can search for stronger neural networks in a gradient-free fashion by restricting the search to low-rank networks. We show that this works well for language modeling and reinforcement learning tasks. It's essentially a crossover between the following papers:

- LoRA: Low-Rank Adaptation of Large Language Models
- Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
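
To make the core idea concrete, here is a minimal sketch (not code from the paper): a plain genetic algorithm that evolves the low-rank factors U and V of a single weight matrix, so the search space scales with the rank r rather than with the full parameter count. The toy fitness function, population sizes, and mutation scale below are all placeholder assumptions, not the paper's actual setup.

```python
# Toy sketch: evolve low-rank factors U (d_out x r) and V (r x d_in) of one
# weight matrix with a simple genetic algorithm. The full matrix W = U @ V
# is only materialized at evaluation time; mutation and selection operate
# on the much smaller factor pair.
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT, RANK = 64, 32, 4            # full W would have 32 * 64 = 2048 params
POP, ELITE, GENS, SIGMA = 50, 5, 100, 0.02

# Placeholder task: regress a fixed random batch onto fixed random targets.
# In the paper's setting this would be an RL return or a language-modeling score.
X = rng.normal(size=(128, D_IN))
Y = rng.normal(size=(128, D_OUT))

def init_genome():
    # A genome is just the pair of low-rank factors.
    return (rng.normal(0, 0.1, (D_OUT, RANK)),
            rng.normal(0, 0.1, (RANK, D_IN)))

def mutate(genome):
    # Additive Gaussian mutation on the factors, not on the full matrix.
    U, V = genome
    return (U + SIGMA * rng.normal(size=U.shape),
            V + SIGMA * rng.normal(size=V.shape))

def fitness(genome):
    U, V = genome
    W = U @ V                             # materialize the low-rank matrix
    return -np.mean((X @ W.T - Y) ** 2)   # higher is better

population = [init_genome() for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:ELITE]
    # Next generation: keep elites, fill the rest with mutated copies of elites.
    population = elites + [mutate(elites[rng.integers(ELITE)])
                           for _ in range(POP - ELITE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

The point of the sketch is that each genome carries (D_OUT + D_IN) * RANK numbers instead of D_OUT * D_IN, which is what makes the low-rank factorization act as an indirect encoding for the evolved network.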