A second look at LLMs and Memory
PLUS the Anthropic SDK makes tool use easier
This week is a bit different: for my primary post, I've guest published over on the Packt Deep Learning stack.
It's a follow-up to a previous post of mine on memory, The Memory Illusion: Teaching your AI to Remember, so please follow the link above for that post.
Also note that I've kept working on the demonstration app referenced in the post. It now has a web UI and is meant to make the memory activities transparent, to help you learn how LLMs leverage this type of implicit memory system. (github)
In addition to the full post above, I have a short micro-post this week.
While reviewing the Anthropic SDK docs, I noticed an enhancement to tool use that relieves the developer of some boilerplate and adds a few niceties.
Let the SDK Run Your Tools
Anthropic’s Tool Runner automates the call-execute-reply cycle.
If you have built an agent with Claude, you know the drill. You send a prompt, check if the model wants to call a tool, parse the JSON, execute your function, append the result to the history, and then call the API again.
It is the “Tool Use Loop,” and writing it manually is tedious boilerplate that breaks when you forget to append the assistant message.
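The drill above can be sketched as a runnable loop. This is a minimal illustration, not the SDK's own code: `FakeClient` is a stand-in I've added for `anthropic.Anthropic().messages` so the control flow runs without an API key, and `get_weather` is a hypothetical tool. Real SDK responses carry the same `stop_reason` / `content` shape (tool calls arrive as `tool_use` blocks; results go back as `tool_result` blocks in a user turn).

```python
def get_weather(location: str) -> str:
    # Hypothetical local tool the model can call.
    return f"72F and sunny in {location}"

TOOLS = {"get_weather": get_weather}

class FakeClient:
    """Stand-in for the SDK client: one tool_use turn, then a final answer."""
    def __init__(self):
        self.calls = 0

    def create(self, messages):
        self.calls += 1
        if self.calls == 1:
            return {"stop_reason": "tool_use",
                    "content": [{"type": "tool_use", "id": "tu_1",
                                 "name": "get_weather",
                                 "input": {"location": "Paris"}}]}
        return {"stop_reason": "end_turn",
                "content": [{"type": "text",
                             "text": "It is 72F and sunny in Paris."}]}

def run_loop(client, messages):
    while True:
        response = client.create(messages=messages)
        # The step that's easy to forget: append the assistant turn
        # to the history before sending back any tool results.
        messages.append({"role": "assistant", "content": response["content"]})
        if response["stop_reason"] != "tool_use":
            return response
        # Execute each requested tool and package the results as a user turn.
        results = []
        for block in response["content"]:
            if block["type"] == "tool_use":
                output = TOOLS[block["name"]](**block["input"])
                results.append({"type": "tool_result",
                                "tool_use_id": block["id"],
                                "content": output})
        messages.append({"role": "user", "content": results})

history = [{"role": "user", "content": "What's the weather in Paris?"}]
final = run_loop(FakeClient(), history)
print(final["content"][0]["text"])  # → It is 72F and sunny in Paris.
```

Every branch here (parse, dispatch, append, re-call) is state you maintain by hand, which is exactly what the Tool Runner absorbs.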
Anthropic's SDKs now include a beta feature called Tool Runner that automates this entire cycle. It turns a multi-step state-management problem into a single function call. As a beta feature, expect refinements, but the core pattern is solid.
Here is how it works and why you should switch.
The Problem: The “Ping-Pong” Effect
Subscribe to Altered Craft to keep reading this post and get 7 days of free access to the full post archives.