All of which suggests an interesting twist for the near future of AI. In a long-context world, maybe the organizations that benefit from AI will not be the ones with the most powerful models, but rather the ones with the most artfully curated contexts. Perhaps we'll discover that organizations perform better if they include more eclectic sources in their compiled knowledge bases, or if they employ professional archivists who annotate and selectively edit the company history to make it more intelligible to the model. No doubt there are thousands of curation strategies to discover, if that near future does indeed come to pass. And if it does, it will suggest one more point of continuity between the human mind and a long-context model. What matters most is what you put into it.
Journalist and writer Steven Johnson, author of Where Good Ideas Come From among other books, has been working very closely (as an employee?) with the team building NotebookLM at Google. This is his latest thinking about what even longer context windows for LLMs mean in practice. The very last paragraph also reminded me of a soundbite from the TWIML podcast. Paraphrasing: "If you wish for smarter models, you haven't done enough thinking yourself." The tools are becoming increasingly powerful, but hard work is still needed to maximize the value created from them.
Visit the source: adjacentpossible.substack.com