OpenAI's Strawberry May Enhance AI Reasoning, Optimization of Vision Language Models, Info on Machine Learning Reproducibility, and More
Weekly updates and resources 7/15/24
Top Machine Learning Resources and Updates
Here are the most important machine learning resources and updates from the past week. I share more frequent ML updates on X if you want to follow me there. You can support Society's Backend for just $1/mo to get a full list of everything I’m reading in your inbox each week. You can find last week's updates here.
Artifacts Log 2: Gemma 2, more Chinese LLMs, high quality datasets, and domain-specific training
GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism
Tackling the Abstraction and Reasoning Corpus (ARC) with Object-centric Models and the MDL Principle
OpenAI's new AI code-named 'Strawberry'
OpenAI is developing a secretive new AI model called 'Strawberry,' which aims to enhance AI reasoning and enable autonomous internet research. This project follows the previously rumored Q* model and has generated significant internal and external buzz. Additionally, three mysterious AI models have appeared in the LMSYS Chatbot Arena, hinting at potential new releases from OpenAI.
🥇Top ML Papers of the Week
I recommend reading this summary every single week. I consider myself pretty paper-savvy, and even I wouldn't dare brave the current machine learning paper landscape on my own. I use X and this summary to find meaningful ML papers, and I suggest you do the same. Elvis does an excellent job.
Tackling the Abstraction and Reasoning Corpus (ARC) with Object-centric Models and the MDL Principle
The Abstraction and Reasoning Corpus (ARC) is a challenging benchmark introduced to foster AI research towards human-level intelligence. It is a collection of unique tasks about generating colored grids, specified by only a few examples. The goal of ARC is to create problem sets that are truly indicative of AI intelligence and cannot simply be trained on to skew model results. This paper takes a novel approach toward solving these problems that I think is worth the read.
Lightning-AI/litgpt: 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
This is an excellent repository created by Lightning AI that walks you through pretraining, fine-tuning, and deploying more than 20 LLMs. It's specifically designed to be user-friendly, and each LLM recipe is optimized for performance. Definitely worth checking out if you're considering building applications with LLMs.
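To give a feel for how lightweight the interface is, here's a minimal sketch using the repo's high-level Python API. The model name and call signature are assumptions on my part, so check the README for the exact, current interface before copying:

```python
# Minimal sketch of litgpt's high-level Python API (the model name and
# arguments here are illustrative -- see the repo's README for specifics).
from litgpt import LLM

# Downloads the checkpoint on first use, then loads it for inference.
llm = LLM.load("microsoft/phi-2")

# Generate a completion for a prompt.
print(llm.generate("Explain pipeline parallelism in one sentence."))
```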
Why Machine Learning Systems Misbehave
This is my article from last Friday. It helps both consumers and software engineers understand why machine learning systems are difficult to work with compared to traditional software. I cover three aspects of machine learning systems: non-determinism, a lack of interpretability, and fluidity. I also walk through specific examples of how these differ from the traditional software systems we're all used to.
AI's Cloudy Path to Zero Emissions, Amazon's Agent Builders, Claude's UI Advance, Training On Consumer GPUs
This is Andrew Ng's weekly machine learning newsletter. I highly recommend his writing and paying attention to his updates: he's not only an excellent teacher, he also communicates why current events in the AI landscape matter. In this issue he goes over SB 1047, an important piece of proposed AI regulation that could stifle innovation.
Instant Python: Essential lessons in 5 minutes Flat!
I’ve been following Akshay for a while and highly recommend any of his educational resources. Here’s a quick summary of a Python book he recently released:
Introducing "Instant Python" – an illustrative guide designed to teach Python concepts in just 5 minutes! Perfect for beginners and intermediates alike, this eBook is your shortcut to mastering one of the most powerful programming languages in the tech world.
Preference Optimization for Vision Language Models
This blog post by Hugging Face discusses optimizing vision language models (VLMs) using preference optimization. By reducing memory requirements with techniques like quantization and LoRA, large models such as Idefics2-8b can be trained on GPUs with far less memory than full fine-tuning would need. The post provides a step-by-step guide to setting up, training, and evaluating these models. Key methods include using bfloat16 precision and gradient checkpointing to manage memory. The approach helps improve model performance and reduce hallucinations.
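To make the memory-saving pieces concrete, here's a rough sketch of what loading a VLM with those techniques can look like using transformers and peft. This is not the exact recipe from the Hugging Face post; the model ID, LoRA settings, and target modules are illustrative assumptions:

```python
# Sketch: load a VLM with 4-bit quantization, bfloat16, gradient
# checkpointing, and LoRA adapters to cut memory before preference training.
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "HuggingFaceM4/idefics2-8b"  # assumed checkpoint name

# 4-bit quantization keeps the frozen base weights small in GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,       # bfloat16 precision
    quantization_config=bnb_config,
)
model.gradient_checkpointing_enable()  # trade extra compute for less memory

# LoRA: train small adapter matrices instead of all 8B parameters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # illustrative target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here, the quantized, adapter-wrapped model would be handed to a preference-optimization trainer along with chosen/rejected response pairs; the blog post walks through that part in detail.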
Artifacts Log 2: Gemma 2, more Chinese LLMs, high quality datasets, and domain-specific training
Nathan Lambert discusses recent developments in open models, highlighting key releases like Google's Gemma 2 and Nvidia's Nemotron 340B. Gemma 2 outperforms many existing models and shows potential to replace Llama models. Other noteworthy models include Qwen2-72B-Instruct and DeepSeek-V2-Lite from Chinese contributors. Lambert also covers new high-quality datasets and domain-specific training efforts, emphasizing their value for fine-tuning and specialized tasks. The article underscores the thriving ecosystem of open models and the continuous advancements in AI research and applications.
Why it's hard to make machine learning reproducible
Ensuring reproducibility is crucial for accurate, reliable results and for any work that builds on them. Christoph Molnar shares a stressful project experience to illustrate why it's hard to achieve. Key culprits include non-deterministic model training, large datasets, and long training times. Jupyter notebooks introduce hidden state and non-linear execution issues, and proprietary algorithms and APIs add further complications.
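Non-determinism is the one piece you can partially control in code. Here's a minimal sketch of the usual mitigation of pinning random seeds; the exact set of seeds depends on your stack, and the PyTorch lines are an assumption about what's in yours:

```python
# Minimal sketch: fix random seeds so repeated runs are more comparable.
# This reduces, but does not eliminate, the non-determinism described above.
import os
import random

import numpy as np

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)

# If PyTorch is part of your stack (an assumption here), its seeds and
# deterministic flags matter too:
# import torch
# torch.manual_seed(SEED)
# torch.use_deterministic_algorithms(True)
```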