Agentic Engineering / Foundation

The Future of AI Agents Just Arrived ( /goal for Claude Code & Codex)

Treat `/goal` as an agentic completion contract: state the desired outcome, define proof of done, and let the agent continue through planning, execution, and verification.

Jay E | RoboNuggets · 14 min · Transcript-ready

Quick learning frame

Read this before watching.

Agentic engineering is the discipline of turning fuzzy intent into scoped, verifiable agent work packets with taste and review built in.

This is a concise operating pattern for moving from prompt-response work to outcome-driven agent sessions.

Watch for the shift from claim to mechanism. The learning value is the point where the transcript reveals a repeatable action, tool boundary, context move, review habit, or artifact.

Concept diagram

Where this video fits.

01 Intent
02 Task Packet
03 Agent Run
04 Evidence
05 Review
06 Standard

Deep lesson

Turn this video into working knowledge.

3,022 cleaned transcript words reviewed across 806 timed caption segments.

Thesis

The Future of AI Agents Just Arrived ( /goal for Claude Code & Codex) teaches a practical agentic engineering move: treat `/goal` as an agentic completion contract. State the desired outcome, define proof of done, and let the agent continue through planning, execution, and verification.

The goal is not to remember the video. The goal is to extract the operating principle, tie it to timestamped evidence, test how far the claim transfers, and make something reusable.
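As a concrete illustration, a `/goal` invocation following this contract might look like the sketch below. The exact wording and formatting are assumptions for illustration; the outcome-plus-definition-of-done structure is what the transcript describes (per the 3:00 anchor, the agent checks itself against the definition of done at the end of each turn).

```text
/goal Publish next week's three newsletter articles as drafts

Definition of done (the agent re-checks this at the end of each loop):
- three draft files exist, one per article, each with a title and body
- each draft passes the repo's existing lint/format check
- a short summary of what was produced is reported back as evidence
```

The leverage is in the definition of done: the tighter and more checkable those lines are, the less the agent wanders between turns.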

1:14

Problem frame

“actually introduced by Codex just 2 weeks ago. It got really popular over at X and now the Entropic team pretty much just copied it from Codeex and I think that's fine because that just gives us more...”

Name the problem or capability the video is actually trying to teach before you list any tools.

6:42

Working mechanism

“weekly rate limits reset, you won't have any regrets with this huge portion of your tokens being unutilized. But now, let's actually use gold for a more complex task. And what we'll do is do that for both...”

Study the mechanism: what context, tool, setup, or workflow change makes the result possible?

9:46

Transfer moment

“a lot to be desired when it comes to the design and the visuals of this. And interestingly, Claude code when it described the different AI labs in here, it described entropic as honest, helpful, and harmless. Google...”

Convert the demonstration into an artifact, checklist, or operating rule you can use again.

01

Intent

Start with this video's job: treat `/goal` as an agentic completion contract, stating the desired outcome, defining proof of done, and letting the agent continue through planning, execution, and verification. Treat "Intent" as the outcome you are trying to make visible, not a topic label. Anchor it to 1:14, where the video says: “actually introduced by Codex just 2 weeks ago. It got really popular over at X and now the Entropic team pretty much just copied it from Codeex and I think that's fine because that just gives us more...”

02

Task Packet

Use "Task Packet" to locate the part of the agentic engineering workflow the video is demonstrating. Ask what changes in your real setup if this claim is true. Anchor it to 6:42, where the video says: “weekly rate limits reset, you won't have any regrets with this huge portion of your tokens being unutilized. But now, let's actually use gold for a more complex task. And what we'll do is do that for both...”

03

Agent Run

Turn "Agent Run" into the reusable artifact for this lesson: A task packet that a coding agent could execute without wandering. This is where watching becomes something you can inspect and reuse.

04

Evidence

Use "Evidence" as the application surface. Decide whether the idea touches a browser flow, a local file, a model choice, a source document, a UI, or a review step.

05

Review

Use "Review" to prove the lesson. The evidence should connect back to the video title, transcript anchors, and a concrete output, not a generic best-practice claim.

06

Standard

Use "Standard" to carry the idea forward: save the prompt, checklist, diagram, or operating rule that would make the next agent run better.

Example

Source-backed work packet

Convert the video into a scoped task that includes the transcript claim, target workflow, acceptance criteria, and proof. The output should be a task packet that a coding agent could execute without wandering.
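A minimal sketch of such a work packet, with illustrative field names and a hypothetical target workflow (the structure, not the exact schema, is the point):

```text
# Work packet: batch-create weekly articles with /goal
Transcript claim (3:00): the agent checks itself against the definition
  of done at the end of each turn.
Target workflow: generate next week's articles before weekly rate
  limits reset (4:39).
Acceptance criteria:
  - one file per article, each with title, outline, and body
  - no article shorter than the agreed minimum length
  - agent self-verifies against this list before declaring done
Proof: directory listing of the generated files plus one
  spot-checked article pasted into the review notes.
```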

Example

Claim vs. demo brief

Separate what the speaker claims, what the demo actually proves, and what still needs outside verification before you adopt the workflow.

Example

Teach-back module

Transform the lesson into a definition, a mechanism diagram, one misconception, one practice exercise, and a check-for-understanding question.

Do not learn it wrong
  • Treating the title as the lesson without checking what the transcript actually says.
  • Letting the prompt drift into generic advice that could apply to any video in the playlist.
  • Copying the tool setup without identifying the operating principle that transfers to your own stack.
  • Skipping the artifact, which means the learning never becomes operational or inspectable.

Transcript-derived moments

Use timestamps to study the actual video.

Quality check

Do not count this as learned until these are true.

01

State the transcript-backed claim in your own words: `/goal` acts as an agentic completion contract, where you state the desired outcome, define proof of done, and let the agent continue through planning, execution, and verification.

02

Explain the practical stakes without hype: This is a concise operating pattern for moving from prompt-response work to outcome-driven agent sessions.

03

Map the idea onto the Intent -> Task Packet -> Agent Run -> Evidence -> Review -> Standard sequence and name the weakest link.

04

Produce the artifact and include the evidence that proves it: A task packet that a coding agent could execute without wandering.

Put it into practice

Give this grounded prompt to Codex or Claude after watching.

You are helping me turn one specific YouTube video into real, durable learning.

Source video:
- Title: The Future of AI Agents Just Arrived ( /goal for Claude Code & Codex)
- URL: https://www.youtube.com/watch?v=aEDq1bBynOg
- Topic: Agentic Engineering
- My current learning frame: Treat `/goal` as an agentic completion contract: state the desired outcome, define proof of done, and let the agent continue through planning, execution, and verification.
- Why this matters: This is a concise operating pattern for moving from prompt-response work to outcome-driven agent sessions.

Transcript anchors from this exact video:
- 1:14 / Evidence 1: "actually introduced by Codex just 2 weeks ago. It got really popular over at X and now the Entropic team pretty much just copied it from Codeex and I think that's fine because that just gives us more..."
- 3:00 / Evidence 2: "Cloud Code will do is try to complete that task and then check itself at the end of that turn at the end of that particular loop if it satisfied this definition of done that you gave it."
- 4:39 / Evidence 3: "use goal in order to batch create let's say articles or content or other regular automations that you need to be running the following week and just do it today before your weekly rate limits reset using the..."
- 6:42 / Evidence 4: "weekly rate limits reset, you won't have any regrets with this huge portion of your tokens being unutilized. But now, let's actually use gold for a more complex task. And what we'll do is do that for both..."
- 9:46 / Evidence 5: "a lot to be desired when it comes to the design and the visuals of this. And interestingly, Claude code when it described the different AI labs in here, it described entropic as honest, helpful, and harmless. Google..."
- 11:23 / Evidence 6: "house to pick. So, let's say we want to play as entropic. And there you go. It has much nizer visuals because of that GPT image 2 capability. And let me just zoom out here so we can..."
- 12:56 / Evidence 7: "order to oneshot as much as possible a project like this then what I would do is take a lot of time to just refine this definition of done and this acceptance criteria because that is what the..."

Your task:
1. Use the transcript anchors above as the primary source packet. If you add outside context, label it clearly as outside context and keep it secondary.
2. Create a source-check table with columns: timestamp, claim, what the demo proves, confidence, and what still needs verification.
3. Extract the actual teachable claims from the video. Do not invent claims that are not supported by the title, lesson frame, or transcript anchors.
4. Build a reusable learning artifact: A task packet that a coding agent could execute without wandering.
5. Include:
   - a plain-English definition of the core idea
   - a diagram or structured model using this sequence: Intent -> Task Packet -> Agent Run -> Evidence -> Review -> Standard
   - 3 concrete examples that apply the video idea to real agentic work
   - 2 failure modes the video helps prevent
   - a checklist I can use the next time I run Codex or Claude
   - one practical exercise with a clear done signal
6. Add a "learning transfer" section: what changes in my workflow tomorrow if I actually learned this?
7. Add a "source check" section that cites which transcript anchor supports each major takeaway.

Quality bar:
- Make this specific to "The Future of AI Agents Just Arrived ( /goal for Claude Code & Codex)", not a generic Agentic Engineering essay.
- Prefer operational examples, failure modes, and reusable artifacts over broad definitions.
- Call out uncertainty instead of smoothing over weak evidence.
- If evidence is weak, say what transcript segment or timestamp needs review instead of guessing.
- Finish with a concise artifact I could paste into my learning app.
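For task 2 in the prompt above, a filled-in source-check row might look like this (the claim reading and confidence value are illustrative interpretations of the anchors, not verified facts):

```text
| timestamp | claim                        | what the demo proves            | confidence | needs verification                |
| --------- | ---------------------------- | ------------------------------- | ---------- | --------------------------------- |
| 3:00      | agent self-checks against    | the loop continues until the    | medium     | whether self-checks hold on       |
|           | the definition of done each  | stated criteria are met in one  |            | longer, multi-file tasks          |
|           | turn                         | demo task                       |            |                                   |
```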

Misconceptions

What to stop believing.

Agentic engineering means letting agents do everything.

It means designing work so agents can do bounded pieces well.

Code review is optional if tests pass.

Tests catch behavior. Review catches architecture, readability, maintainability, and product judgment.

Practice studio

Learning only counts when you make something.

01

Transcript evidence map

Separate what the video actually says from what you already believe about the topic.

3 source-backed takeaways with timestamps, confidence, and a transfer note.

02

One useful artifact

Apply the video to a real workflow and produce a task packet that a coding agent could execute without wandering.

A reusable artifact with a done signal and one verification step.

03

Teach-back card

Explain the lesson to someone who has not watched the video yet.

A 90-second explanation, one diagram, one example, and one misconception to avoid.

Recall check

Can you answer without rewatching?

What is the video asking you to understand?

Treat `/goal` as an agentic completion contract: state the desired outcome, define proof of done, and let the agent continue through planning, execution, and verification.

What makes this lesson trustworthy?

It is backed by 3,022 transcript words and timed transcript moments.

What should you make after watching?

A task packet that a coding agent could execute without wandering.

Source shelf

Use the video as a doorway, then verify with primary sources.

Reading: OpenAI Prompt Engineering Guide

Use this to sharpen instructions, examples, constraints, and tool-use prompts.

platform.openai.com/docs/guides/prompt-engineering
Docs: Claude Code overview

Read this to compare Codex-style workspace operation with Claude Code’s agentic coding model.

docs.anthropic.com/en/docs/claude-code/overview
Reading: Google Engineering Practices: Code Review

Strong baseline for turning human review taste into reusable agent review criteria.

google.github.io/eng-practices/review/
Podcast: Lenny’s Podcast: Head of Claude Code

A practical discussion of what changes when coding agents become central to engineering work.

www.lennysnewsletter.com/p/head-of-claude-code-what-happens
Podcast: No Priors podcast

Good strategy and builder-level context, including recent conversations around agentic engineering and AI-native products.

podcasts.apple.com/us/podcast/no-priors-artificial-intelligence-technology-startups/id1668002688
Podcast: Latent Space: The AI Engineer Podcast

Best recurring feed for AI engineering, agents, evals, codegen, and infrastructure.

www.latent.space/podcast