ML for SWEs

Open AI beats OpenAI, Understand Scaling Laws, Commerce in the Age of AI Agents, and More

Society's Backend Reading List 01-24-2025

Logan Thorneloe
Jan 24, 2025

Here’s the comprehensive AI reading list for this week! This is a list of my favorite educational, topical, or interesting AI articles/videos from the past seven days. A huge thanks to all the authors of these resources and all the supporters of Society’s Backend. 😊

If you want a steady stream of educational resources tailored to machine learning engineers, subscribe to get these in your inbox each week. You can get the extended reading list (~50 resources instead of 10) by supporting Society's Backend for just $1/mo.


I’ve also written an ML roadmap to help anyone learn ML fundamentals from high-quality resources for free. You can check that out here.

What Happened Last Week

Here are some resources to learn more about what happened in AI last week and why those happenings are important:

  • Last Week in AI covers significant AI developments, including Google's partnerships for news delivery and ChatGPT's new task management feature.

  • The Weekly Kaitchup covers more in-depth ML topics such as new model releases and developments.

  • Charlie Guo’s Weekly Roundup covers media outlets partnering with AI companies amid revenue challenges, the implications of those partnerships, and the ethical issues raised by training models on copyrighted material.

  • Devansh’s content recommendations are always great.

Last Week's Reading List

In case you missed it, here are some highlights from last week:

Multimodal Biometric Authentication, Noteworthy AI Research Papers of 2024, 5 Common Mistakes to Avoid When Training LLMs, and more


Logan Thorneloe · Jan 17

Reading List

How Scaling Laws Will Determine AI's Future | YC Decoded

Large language models (LLMs) are growing bigger and more capable, with notable performance improvements arriving every six months or so, driven by scaling laws that relate parameters, data, and compute. Recent research suggests that simply increasing model size isn't enough: training on sufficient data is crucial to realizing a model's potential. The future may shift toward new paradigms, such as enhanced reasoning and "test-time compute," which could lead to breakthroughs toward artificial general intelligence.

Source
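
To make the scaling-law idea concrete, here's a small sketch of a Chinchilla-style loss fit, L(N, D) = E + A/N^α + B/D^β. The constants are the approximate values reported by Hoffmann et al. (2022), and the compute budget and model sizes below are purely illustrative:

```python
# Illustrative sketch: Chinchilla-style scaling law L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are approximate fits from Hoffmann et al. (2022); treat them as illustrative.

E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model with n_params parameters trained on n_tokens tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Same compute budget (C ~ 6 * N * D), split differently between model size and data:
compute = 6 * 70e9 * 1.4e12  # roughly Chinchilla's training budget
for n_params in (10e9, 70e9, 280e9):
    n_tokens = compute / (6 * n_params)
    print(f"N={n_params:.0e} params, D={n_tokens:.0e} tokens "
          f"-> predicted loss {predicted_loss(n_params, n_tokens):.3f}")
```

Under this fit, the mid-sized model trained on more tokens beats the much larger, under-trained one at the same compute, which is the core point about data mattering as much as parameters.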

Open-Source AI and the Future

By Dean W. Ball

Open-source AI is becoming increasingly important in the global technological competition, particularly between the US and China. It has the potential to revolutionize everyday devices and public services, making advanced intelligence accessible to everyone. However, to fully realize this potential, society must imagine and build new institutions that leverage AI for positive outcomes while addressing inherent risks.

Source

Future-Proof Your Machine Learning Career in 2025

A career in machine learning is increasingly important and requires ongoing skill development to remain competitive. Key areas to focus on include mastering core technical skills, embracing emerging trends, and developing essential soft skills. Staying adaptable and committed to continuous learning is crucial for future-proofing your career in this evolving field.

Source

3 Easy Ways to Fine-Tune Language Models

Language models are crucial for many business applications and can be improved through fine-tuning. Three methods for fine-tuning include full fine-tuning, parameter-efficient fine-tuning (PEFT), and instruction tuning, each with its own advantages. These techniques help customize models for specific tasks, reduce computational demands, and enhance generalization capabilities.

Source
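
As a concrete example of the PEFT approach, here's a minimal LoRA sketch using Hugging Face's transformers and peft libraries. The base model, target modules, and hyperparameters are illustrative choices, not recommendations from the article:

```python
# Minimal LoRA (parameter-efficient fine-tuning) sketch with Hugging Face peft.
# Model choice and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# From here, train `model` with your usual training loop on the task-specific
# dataset; only the LoRA adapter weights are updated.
```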

Inside Anthropic's Race to Build a Smarter Claude and Human-Level AI | WSJ

Anthropic is racing to build a smarter version of its AI assistant, Claude, with the long-term goal of human-level intelligence. The company focuses on making AI more aligned with human values and safer. The WSJ piece looks at why this race matters in the broader tech landscape.

Source

DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs

By Nathan Lambert

DeepSeek R1 is a new reasoning language model trained through a four-stage recipe that alternates supervised fine-tuning with large-scale reinforcement learning to improve reasoning while keeping the model generally usable. It is MIT-licensed, so others can build on its outputs to develop their own reasoning models.

Source
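
The RL stages lean on simple rule-based rewards (answer accuracy plus a format reward for keeping reasoning inside think tags). Here's a heavily simplified sketch of what such a reward could look like; the tag handling and weights are my own assumptions, not DeepSeek's implementation:

```python
import re

# Simplified, illustrative rule-based reward in the spirit of R1-style RL:
# an accuracy reward for a verifiable final answer plus a format reward for
# wrapping the reasoning in <think>...</think>. Weights and parsing are assumptions.

THINK_PATTERN = re.compile(r"<think>.*?</think>", re.DOTALL)

def reward(completion: str, reference_answer: str) -> float:
    format_ok = bool(THINK_PATTERN.search(completion))
    # Treat whatever remains after stripping the think block as the final answer.
    final = THINK_PATTERN.sub("", completion).strip()
    accuracy_ok = final == reference_answer.strip()
    return 1.0 * accuracy_ok + 0.2 * format_ok

print(reward("<think>2 + 2 is 4</think> 4", "4"))  # 1.2: correct answer, well-formatted
print(reward("The answer is 4", "4"))              # 0.0: no think tags, answer not extracted
```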

Rewiring the Internet: Commerce in the Age of AI Agents

By Sahar Mor

The rise of AI agents is transforming online commerce by enabling seamless agent-to-agent interactions that streamline payments, marketing, and customer support. New protocols for agent-driven transactions will enhance security and personalization while reducing human involvement. Businesses that adapt to this shift will lead the future of digital commerce.

Source

Model Collapse by Synthetic Data is fake news [Investigations]

By Devansh

Model collapse occurs when low-quality synthetic data reduces the diversity of training data for AI models, leading to degraded performance over time. Maintaining data diversity is crucial, as it prevents models from forgetting rare or improbable events. Properly generated synthetic data can enhance AI training, but it must be combined with real data to ensure effectiveness and avoid collapse.

Source
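
To make the "combine synthetic with real data" point concrete, here's a tiny sketch of capping the synthetic share of a training mix. The 30% ratio and helper function are purely illustrative, not a recommendation from the article:

```python
import random

# Illustrative sketch: build a training mix that caps the synthetic share so real
# data (including its rare "tail" examples) keeps anchoring the distribution.
def build_training_mix(real, synthetic, synthetic_fraction=0.3, seed=0):
    rng = random.Random(seed)
    n_synthetic = int(len(real) * synthetic_fraction / (1 - synthetic_fraction))
    n_synthetic = min(n_synthetic, len(synthetic))
    mix = list(real) + rng.sample(list(synthetic), n_synthetic)
    rng.shuffle(mix)
    return mix

real_examples = [f"real_{i}" for i in range(70)]
synthetic_examples = [f"synth_{i}" for i in range(200)]
mix = build_training_mix(real_examples, synthetic_examples)
print(len(mix), sum(x.startswith("synth") for x in mix) / len(mix))  # ~30% synthetic
```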

A Practical Guide to Integrate Evaluation and Observability into LLM Apps

Opik allows users to track and evaluate the performance of large language models (LLMs) across various metrics like relevance and factuality. It integrates easily with popular LLMs and tools, providing real-time monitoring and insights into system behavior. The guide walks through setting up Opik, tracking functions, and building a Retrieval-Augmented Generation (RAG) application for efficient evaluation.

Source
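
Here's a minimal sketch of the tracing pattern the guide describes, using Opik's track decorator. The retriever and LLM call are stubbed placeholders, and real usage requires installing and configuring Opik per its docs:

```python
# Minimal tracing sketch based on the guide's description of Opik's @track decorator.
# The LLM call is stubbed out; real usage needs `pip install opik` plus Opik
# configuration (API key / workspace) as described in its documentation.
from opik import track

@track  # logs inputs, outputs, and latency for this function as a trace
def retrieve(query: str) -> list[str]:
    # Placeholder retriever; a real RAG app would query a vector store here.
    return ["Opik tracks LLM calls.", "RAG combines retrieval with generation."]

@track
def answer(query: str) -> str:
    context = retrieve(query)  # nested call shows up as a child span
    # Placeholder for the actual LLM call (e.g., a chat completion request).
    return f"Answer to {query!r} using {len(context)} retrieved snippets."

print(answer("What does Opik do?"))
```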

Titans: Learning to Memorize at Test Time

The authors introduce Titans, an architecture built around a neural long-term memory module that lets models memorize and use historical context. The approach combines attention as short-term memory with the learned long-term memory, improving performance on tasks like language modeling and reasoning. Experiments show Titans outperforming Transformers and modern linear recurrent models while scaling to much larger context windows.

Source
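
As a rough intuition for "learning to memorize at test time," here's a toy PyTorch sketch of a memory network whose weights are updated by gradient steps on incoming context during inference. It's a conceptual illustration only, not the paper's architecture or its exact surprise-based update rule:

```python
import torch
import torch.nn as nn

# Toy illustration of test-time memorization: a small MLP acting as long-term
# memory whose weights are updated online from incoming (key, value) pairs,
# even during inference. Conceptual sketch only, not the Titans architecture.

dim = 32
memory = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(), nn.Linear(64, dim))
optimizer = torch.optim.Adam(memory.parameters(), lr=0.01)

def memorize(key: torch.Tensor, value: torch.Tensor) -> float:
    """One online update: the worse the memory predicts value from key
    (the more 'surprising' the pair), the larger the weight update."""
    loss = ((memory(key) - value) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def recall(key: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return memory(key)

key, value = torch.randn(dim), torch.randn(dim)
before = ((recall(key) - value) ** 2).mean().item()
for _ in range(200):  # number of update steps is purely illustrative
    memorize(key, value)
after = ((recall(key) - value) ** 2).mean().item()
print(f"recall error before: {before:.3f}, after: {after:.3f}")  # error drops substantially
```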

AI in Hedge Funds and High Finance

Subscribe to Society's Backend to keep reading this post and get 7 days of free access to the full post archives.