The information overload with AI is real. Everyone shares a copy of meeting notes after the calls, because everyone has an "AI assistant" taking notes during the meeting. Even though everyone uses the same tool, everyone's notes come out slightly different.
Some people even take their own self-written notes + the AI note-taking app's notes, cook them together in yet another LLM, and produce elaborate bullet points.
The meeting notes are mostly too verbose, because AI doesn't know how to write short, crisp bullets, which need not be grammatically correct. In a way, AI knows how to "summarise", but not how to "take notes". Since all these notes are too verbose, people don't really read them; they ask yet another AI to further "summarise". Somewhere in this whole journey, actual key information gets lost. People are a lot less focussed in the meeting now too, given that they know their "AI assistant" is taking notes anyway. They are half distracted, doing other work in parallel during the meeting.
From roadmaps to technical docs to RFCs to performance reviews - AI has made it so much easier to ̶w̶r̶i̶t̶e̶ generate them, that there are just way more of them now. Instead of just writing up a POC - people throw a 6-pager RFC or tech-spec at you first. All the conventional wisdom says "great engineers write great architecture docs" and "great managers write great vision docs" - so now that ̶w̶r̶i̶t̶i̶n̶g̶ generating them is easier, everyone's writing tons of them. All career-guidance and professional how-to books advise you to "write more" to progress further in your career, because in the older world (Amazon meeting agenda 1-pagers, Stripe press releases), writing more would actually train you to write more concise and succinct documents, and develop the muscle to coalesce opinions via the written word. Simply generating more docs via LLMs does not actually make you better at that. On the contrary, now that you generate more noise with too many documents, no one knows which ones are important, and people, unable to keep up, start ignoring your ̶w̶r̶i̶t̶i̶n̶g̶s̶ generations.
The wiki has a "summarise with AI" button, the Zoom call has a "summarise with AI" button, the Google Doc has a "rewrite with AI" button. Everyone who is a "human in the chain" is turning into a glorified network packet switcher between different LLM models. Generate text, send it to someone, who summarises it using another LLM without reading much of it (either the original or the summary) and forwards it to someone else, who rewrites it again using AI and turns it into a presentation, which is then used in a meeting, which gets summarised using AI and saved in the wiki.
There are more meetings now, because more people have the ability to write 'meeting agendas'. There are more 'stakeholders' now, because anyone can write a cursory 'remark' on your documents using the AI tool. There are bigger committees to review performance now, because all packets have started to look and read more or less the same, and are harder to calibrate.
The internal AI tooling team's KPIs have gone through the roof. They just got fresh budget to hire PhDs and purchase a GPU cluster to fine-tune yet another model only on "internal data", which itself is increasingly no longer human-generated content.
Welcome to the post-genAI world. And of course, you can use Grok3 to tl;dr-ify this post if it is too long to read (I am sure it is).
Feels like we're moving from the Information Age to the Information Recycling Age. Do you think writing things yourself will still matter, or will AI take over everything?