
Learning AI with AI: How Gabriel Petersson's Unconventional Path Illuminates a Practical Roadmap for Working Technologists

This article distills a practical action framework for working technologists, based on the Extraordinary podcast's interview with OpenAI research scientist Gabriel Petersson (YouTube Original | Bilibili Mirror).


In December 2024, a young man from a small Swedish town—without a university degree—officially joined the OpenAI Sora team as a Research Scientist, a role traditionally reserved for PhDs.

His name is Gabriel Petersson, 23, a high school dropout. His resume lists no degrees, but boasts a string of projects: recommendation systems at Depict.ai, frontend architecture at Dataland, engineering at Midjourney, and ultimately, research contributions to Sora 2.

On November 28, 2025, he recapped his journey in detail on the Extraordinary podcast. Many media outlets have spun this as a "genius underdog" or "degrees are useless" story. But after listening closely to the hour-plus interview, I believe what's truly worth dissecting isn't his talent, but his method—and what that method means for experienced professionals looking to pivot.

1. Recursive Knowledge Filling: A Counterintuitive Yet Efficient Learning Paradigm

The traditional ML learning path is bottom-up: start with calculus, then linear algebra, then probability, statistical learning, deep learning. Only after working through the foundations, typically two to three years, do you get to touch real models, and many give up somewhere in the theory.

Gabriel did the exact opposite. On the podcast, he calls his process "recursive knowledge filling":

He starts with a concrete problem—say, "I want to understand how Diffusion Models generate video." Then he asks ChatGPT to write a full implementation and starts running it. The code inevitably breaks, or there's a line he doesn't understand. Instead of reaching for a textbook, he goes straight to the AI: "What exactly is this ResNet residual connection doing? Why does it help gradients flow? What's the mathematical intuition? Can you draw a diagram to help me get a feel for it?"
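The residual-connection question he uses as an example has an answer you can check in a few lines of code. Here is a minimal NumPy sketch (my illustration, not from the interview): a residual block computes y = x + F(x), so even when the learned transform F contributes nothing, the identity path carries the signal, and its gradient, straight through.

```python
import numpy as np

def residual_block(x, W):
    """Residual block: y = x + F(x), with F(x) = ReLU(W @ x) as a toy transform."""
    return x + np.maximum(0, W @ x)

# The Jacobian of y w.r.t. x is I + dF/dx. Even if dF/dx is ~0
# (a saturated or "dead" layer), the identity term keeps gradients
# flowing through deep stacks of such blocks.
x = np.ones(4)
W = np.zeros((4, 4))            # a dead layer: F(x) = 0 everywhere
y = residual_block(x, W)
print(np.allclose(y, x))        # the identity path preserves the signal
```

This is exactly the kind of toy you can ask an AI to generate and then interrogate line by line, in the spirit of the method described above.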

He highlights two key signals in the interview. First is the ability to notice what you don't understand: "You have to keep asking yourself: do I really get this?" This isn't innate; it requires deliberate practice. Second is the moment of insight, when you probe deeply enough and everything suddenly clicks. These two signals, he says, are the hallmarks of effective learning.

Most crucially: He doesn't use AI to write code, but to understand code. He explicitly rejects "Vibe Coding"—the practice of tossing requirements to AI and using the output without examining the implementation. He reads every line of AI-generated code, making sure he understands each one.

The Fatal Blind Spot in This Method

But I want to point out an issue Gabriel didn't fully address: lack of completeness.

Recursive filling is essentially "flashlight learning"—you illuminate only the path in front of you, without knowing the map of the whole forest. For well-bounded tasks (like understanding a ResNet module), this works well. But when you need to make system-level architectural decisions, problems arise.

Examples abound:

A search engineer tackling relevance ranking, unaware that the entire Learning to Rank field exists, hand-codes scoring logic in a rule engine. It works, but it is orders of magnitude less efficient than learned ranking and hits a low ceiling.

Another example: a software engineer tries to write an autonomous navigation program, tracking vehicle position with GPS. He notices GPS data is noisy, so he designs a sliding window average filter: take the mean of the last N points to smooth the trajectory. It's better than raw data, but lags badly on turns and can't keep up at speed. He spends weeks tuning window sizes, weights, and hacks, always struggling between "smoothness" and "responsiveness." Yet this problem was elegantly solved in the 1960s by the Kalman Filter—combining motion model predictions and sensor observations, dynamically adjusting trust, yielding both smoothness and real-time performance. This isn't obscure knowledge; it's standard in autonomous driving, robotics, and aerospace. But if you've never browsed the table of contents in control theory or signal processing, you don't even know what keywords to search.

This is the cost of "Unknown Unknowns." Recursive filling can help you turn "Unknown Unknowns" into "Known Unknowns"—but only if you first encounter that blind spot. If the solution lies on a branch you've never traversed, recursion is powerless—you don't even know what to ask the AI.
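The gap in the navigation story is easy to see numerically. Below is a hedged sketch, not the engineer's actual code: a 1D track that reverses direction, a trailing window mean, and a constant-velocity Kalman filter. The noise level, window size, and filter tuning are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# True track: moves right at constant speed, then abruptly reverses (a "turn").
true_pos = np.concatenate([np.arange(0, 50, 1.0), np.arange(50, 0, -1.0)])
noisy = true_pos + rng.normal(0, 3.0, size=true_pos.size)   # GPS-like noise

def window_mean(z, n=10):
    """Baseline: trailing N-point average. Smooth, but lags behind motion."""
    out = np.empty_like(z)
    for i in range(z.size):
        out[i] = z[max(0, i - n + 1): i + 1].mean()
    return out

def kalman_1d(z, dt=1.0, q=0.1, r=9.0):
    """1D constant-velocity Kalman filter; state = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                    # motion model
    H = np.array([[1.0, 0.0]])                               # observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                      # process noise
    x = np.array([z[0], 0.0])                                # initial state
    P = np.eye(2) * 10.0                                     # initial uncertainty
    est = []
    for m in z:
        x = F @ x                                            # predict
        P = F @ P @ F.T + Q
        y = m - H @ x                                        # innovation
        S = H @ P @ H.T + r                                  # innovation covariance
        K = P @ H.T / S                                      # gain: trust in measurement
        x = x + (K * y).ravel()                              # update
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

ma_err = np.abs(window_mean(noisy) - true_pos).mean()
kf_err = np.abs(kalman_1d(noisy) - true_pos).mean()
print(f"window mean error: {ma_err:.2f}, Kalman error: {kf_err:.2f}")
```

The window mean has one knob that trades smoothness against lag, while the Kalman filter gets both at once by carrying a velocity estimate and weighting prediction against measurement through the gain K. The point isn't the specific numbers; it's that you can't discover this filter by tuning the window, only by knowing the branch of knowledge exists.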

My Approach: The T-shaped AI Learning Method

Combining Gabriel's "wild path" with the value of systematic education, a more pragmatic dual-track model is:

Horizontal—Build an Index. Find a few classic textbooks in the field (Bishop's PRML, Goodfellow's Deep Learning Book), but don't read them cover to cover. Just skim the table of contents and each chapter's introduction. Spend a week or two letting AI walk you through a one-sentence explanation of each concept. The goal isn't understanding, but awareness of existence. Your brain needs a coarse-grained knowledge map, so when you hit a problem in a project, you at least know which direction to search.

Vertical—Recursive Deep Dives. This is Gabriel's mode. When tackling a concrete problem, use AI to drill into blind spots until you reach first principles.

In a nutshell: Use the table of contents to draw the map; use AI to navigate the maze.

For example, if you're a working technologist starting an AI course, don't slog through it chapter by chapter. Instead, scan the syllabus to build a knowledge index; then, when a project confronts you with a concept you don't get (say, a neural network classification task), return to the index and recursively dig in with ChatGPT. This approach can take you from software engineer to running ML experiments independently, all while holding down a full-time job.

2. Career Breakthrough: The "Digital Elevator" That Bypasses Resume Filters

Gabriel shared a vivid sales story on the podcast. Early on, no one would buy their recommendation system—cold emails vanished, phone pitches failed. Then he had an idea: scrape target customers' websites in advance, generate recommendations with his own model, print A3 comparison charts—left side, the customer's current recommendations; right side, theirs. Then he'd go door to door.

He went further: he carried an injection script that could swap out recommendations live in the customer's browser and run A/B tests on the spot. Often, he'd get customers to switch on the first meeting.

The underlying logic: lower the decision-maker's cognitive cost. HR screens resumes to minimize risk—no degree is a risk. But CEOs and tech leads are incentivized by growth—whoever solves the problem gets the nod. Gabriel's approach put results directly in front of decision-makers, letting them see the value in seconds.

He landed the Midjourney gig in a similar way: met someone at an event, demoed his work, and got the opportunity directly. As he summarized on the podcast, career development isn't a deterministic path, but "a series of small opportunities stacking up."

Where's the "Digital Elevator" for Regular People?

Gabriel's story is set in San Francisco, where you can physically meet tech decision-makers. For those outside Silicon Valley, knocking on doors isn't realistic. But the digital world's channels are always open:

Open source contributions are the most direct elevator. Fix a stubborn bug or submit a high-quality PR to a well-known project, and your name lands in the maintainer's notifications. When you lack a top degree, publicly verifiable output is your diploma.

"Permissionless internships" are another route. Don't wait for an offer—just do the work for your target company or team. See an open source issue languishing? Write the PR. Notice a product that could be improved? Build a demo and send it to the tech lead. As Gabriel bluntly put it: "Companies just want to make money. If you can prove you help them make money, if you can write code, they'll hire you."

The key isn't your title, but whether you can make your value obvious in 30 seconds.

3. Risk Management: Neither Glorifying Hardship Nor Avoiding Fear

Gabriel admits on the podcast that he once had a "distorted perception of reality"—he was 100% convinced he'd be a billionaire, so he could sleep on scavenged couch cushions and work all night. He also acknowledges he was "very lucky"—his cousin's startup gave him his first ticket into the industry.

This honesty is important. If we only see the "dropout who made it to OpenAI" narrative, it's easy to fall into survivor bias: "just work hard and you'll succeed." Reality is: for working professionals with mortgages, families, and fixed expenses, Gabriel's all-in gamble is far too risky.

A more reasonable approach is Nassim Taleb's Barbell Strategy:

One end is certainty (90% of your effort): Keep your current job, maintain your income floor. Don't let your learning pivot threaten your survival.

The other end is high-risk experiments (10% of your effort): Use nights and weekends to do an AI side project, write a tech blog, submit an open source PR. Each small experiment is a low-cost lottery ticket—most won't win, but if one hits, it could open a new career channel.

Gabriel's own path actually reflects this logic: he first gained hands-on experience at his cousin's startup, then freelanced as a contractor, then earned O-1 visa eligibility through Stack Overflow contributions, and only then joined Midjourney and OpenAI. Each step was an incremental risk on top of the last, not a one-shot gamble.

Conclusion

Gabriel Petersson's story is not an argument for "degrees are useless"—he himself admits that lacking a diploma is still a limitation in some scenarios. What his story really shows is: in the AI era, the monopoly on knowledge acquisition is broken, but the bar for deep understanding hasn't lowered.

The tools have changed, but the underlying logic remains. You still need the patience to read code line by line, the persistence to chase questions until you reach insight, and the courage to showcase your work publicly. AI isn't a shortcut; it's an accelerator—but you're still at the wheel.

If you're an experienced professional transitioning to AI, my advice is: don't wait until you're "ready" to start, and don't quit your job to gamble. Build your index with tables of contents, drive your learning with projects, use public output to substitute for credentials, and manage risk with the barbell strategy.

This isn't a "perfect strategy." It's simply a pragmatic way to keep moving forward in uncertainty.