Saturday, December 2, 2023

Is That True?

 

My husband, the perennial jokester, has always been fond of sharing tales both amusing and incredible, which taught our daughter in her early childhood to ask, "Mom, is that true?" We've all long since grown in our ability to research outlandish claims—and even some more sedate and believable ones—but I still find that question from her childhood to be pertinent to many of the situations we face today.

The question "Is that true?" comes to mind once again as genealogists—not to mention the modern world at large—explore how we can use artificial intelligence, or AI, to advance our research progress. Of course, AI has been working behind the scenes in various stages for several years now, but with the advent of such tools as ChatGPT, it seems as if people are looking at an entirely new invention, rather than an evolution of capabilities developed by the technology world.

Recently, I was reading an article in the free monthly e-newsletter "Genealogy Notebook" from the Polish Genealogical Society of America. The article, "How to Get Useful Answers to Your Genealogy Questions," provided a few links in a discussion about using an alternate search engine, Microsoft Bing, on account of its AI chat feature called Copilot. The main article referenced—and by the same title—was from DiAnn Iamarino Ohama's blog, Fortify Your Family Tree.

While admittedly, the information the author found in exploring the AI-assisted search engine seemed impressive, I have been witness to other times when AI provided answers with incorrect information—yet presented it with as straight a face as a computer could muster. This is when my daughter's childhood voice bubbles up in the back of my mind and I find myself repeating her mantra: "Is that true?"

Apparently, with enough people experiencing the same dilemma, there are now articles exploring what to do when the AI-generated information is not true. Last spring, PC Magazine ran an article with the hopeful title, "That's Not Right: How to Tell ChatGPT When it's Wrong"—which presumably would work when using similar AI tools. There are many other articles which have taken up that same lament: the answers, unfortunately, are not always true.

Figuring out what to do next seems a practical response to our disappointment in what had promised to be a great utility. I've seen directions ranging from telling the chatbot it's wrong to giving it permission to say, "I don't know." In one memorable post, Seth Godin seemed to ascribe to the machine a need for the human quality of humility over overconfidence—or, at least, urged us to be the humans in the exchange and recognize that "when a simple, convenient bit of data shows up on your computer screen, take it with a grain of salt."

Still, the promise of creating a machine that can amass a good portion of the world's knowledge and offer it up to us simply for the asking has an enormous draw. We all can imagine research scenarios—a.k.a. those intractable brick wall ancestors—where AI could become the dream machine, rather than the answer hallucinator it currently seems to be. We certainly don't need to write it out of our research repertoire just yet. Current releases like Thomas MacEntee's August launch of the online journal, AI and Genealogy Guide, hint at things to come in this evolving application of artificial intelligence. Nor do we need to view it as a development still waiting in the future; advances in AI technology have already been benefitting genealogy for years.

Just when we family history lovers thought we had done enough to make our peace with technology, it turns out there is vastly more to come. As long as we recognize these tools for what they are and what they can do for our research, remembering that the humans in the equation are still entitled to ask "Is that true?" should keep us from falling into that bottomless pit of idolizing, then excoriating, the machine that can help us improve.
