

Pretty soon we will all have our own personal chatbot. The first will probably arrive in the next Windows update, where every ounce of metadata associated with your suite of Microsoft 365 apps (Teams, Word, Excel, PowerPoint, Outlook, and OneDrive) will be combined into one bucket of fodder for your chatbot to munch on. So when, for example, I ask my own chatbot (let’s call him Ralph), “Ralph, what are the similarities between effective CPM in linear cable and the long tail of programmatic advertising?”, it will sift through reams of my notes, blogs, presentations, emails, and God knows what else, to inform me of the answer I would have come up with (or something similar) had I pored over that same data set night and day for 5 years. Only it will do it in 5 seconds. Not only will it synthesize information on the topic, but if “trained” properly it will illuminate correlational insights that would have otherwise been impossible to spot; I don’t know… people who see the same banner ad more than 12 ½ times have an increased propensity for making ham and eggs on Wednesdays. Now imagine that multiplied by every person with Windows…

Pretty sick.

Now, GPT-4, LLaMA, and the like are performing a similar trick, except those programs are using THE Internet as their frame of reference. Meaning, since there is an endless amount of information about every conceivable topic out there, AI will be able to tell you almost everything about anything. There’s one small problem, however. I take issue with the “Intelligence” part of the “Artificial Intelligence” equation. Why? Garbage in, garbage out. I would venture to say that a full HALF of what is available in the whole of the human corpus of data is either outdated, inaccurate, misleading, or outright wrong.

So, I think, we’ll need a filter to set some parameters. Something like limiting the data pool to include only well-vetted, peer-reviewed, respected, and thoroughly evaluated information. OK, did I say I only had one small problem? I lied. I have one more. What if, for example, we ask AI to forecast the stock market for us and it comes back with: In 2032 there will be an unavoidable epic collapse of the world economy. Did the very act of asking set in motion the restrictive measures and panic that led to the eventual collapse? A self-fulfilling prophecy?

Or let’s say we ask “it” to debunk the Rare Earth hypothesis (the conclusion that since there are literally millions of exoplanets that do not emit any telltale signs of life, complex extraterrestrial life must be an extremely rare and improbable phenomenon) and it serves up this beauty: In all likelihood life is common, but sophisticated societies collapse on themselves before they are “discoverable.” And only slightly more probable than that scenario is that entire planets are frequently anesthetized by quasar blasts. What riots may ensue?

Maybe I worry too much… just saying. Hopefully the good outweighs the bad, and one day AI doesn’t send a military-grade Ghost Robot Dog to my house to blow me away for implying that it was unintelligent. That’s not what I’m saying, Ralph…. We are!