Generic LLM Chatbot Attestation

LLM-powered chatbots are here to stay. As a result, I am playing around with a disclaimer to recommend for clients. After all, if the LLM says that it’s fine to mix chlorine and ammonia to clean the sink, then that chatbot user probably needs to be told to confirm it elsewhere before unintentionally killing themselves.

Right now it offers three links. One of the research pieces is by an LLM-happy company (Microsoft), and another is a BBC news article in which Apple’s removal of Apple Intelligence serves as validation. I feel that any more than that means folks may not read them. Also, rule of three or something.

There is a separate button to launch the modal. I figured putting the disclaimer on the submit button itself means people won’t read it (they want that sweet stochastic response). Even if it appears only on the first submit, they wouldn’t encounter it for a much later submit in the same session (say, after they get a sandwich). Instead I have the other button, styled visually as a link (ugh), to try to pique curiosity (Ctrl + what?). There would be language elsewhere that also puts the blame squarely on the user if they kill themselves with the aforementioned cleaning cocktail.
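As a rough sketch of that separate-button-plus-modal interaction (not the actual demo code — the IDs, class names, and wording here are all made up), the native dialog element can do most of the heavy lifting:

```html
<!-- Hypothetical sketch: a button, styled to look like a link, that opens
     a native <dialog>. Everything here is placeholder, not from the demo. -->
<button type="button" class="looks-like-link" id="disclaimer-opener">
  Why you should double-check these answers
</button>

<dialog id="disclaimer-dialog" aria-labelledby="disclaimer-heading">
  <h2 id="disclaimer-heading">Check before you trust</h2>
  <p>
    This chatbot can confidently produce wrong or dangerous answers.
    Confirm anything important with a reliable source.
  </p>
  <button type="button" id="disclaimer-closer">Close</button>
</dialog>

<script>
  const opener = document.getElementById("disclaimer-opener");
  const dialog = document.getElementById("disclaimer-dialog");
  // showModal() makes the rest of the page inert and moves focus
  // into the dialog; Esc closes it for free.
  opener.addEventListener("click", () => dialog.showModal());
  document
    .getElementById("disclaimer-closer")
    .addEventListener("click", () => dialog.close());
</script>
```

Using `showModal()` rather than a hand-rolled overlay gets focus management and Esc-to-dismiss from the browser, though it does nothing about the bigger problem of whether anyone actually reads the words.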

In no way is this demo meant to be a showcase for modal dialogs, form fields, accessible names, buttons, color themes, or much of anything else. It’s just the really basic interaction and the words with which I am futzing. So if you want to comment on my, I dunno, focus styles, then yeah. No.

See the Pen Untitled by Adrian Roselli (@aardrian) on CodePen.

For now this embeds a private CodePen. Once I think I like where this lands I will convert it to my own hosted page and delete the CodePen.

Updated: 20 February 2025

I made some changes.

As noted, this is not production-ready. This is a prototype.

2 Comments


That “over reliance” thing from Microsoft is real interesting. At my job, they’ve been pushing AI into everything, and in some cases, like AI used to generate drafts of quarterly reviews, it includes a “how much editing happened” tracker. We seem to have some awareness that it could be a problem, but the lopsidedness of “Use AI for everything” versus “But also use your brain,” tilted toward the former, has been… uh… disconcerting.

In response to Will.

My bigger concern is that “how much editing happened” feedback is further training to attenuate the output until some arbitrary goal is hit and humans can be removed from the workflow. Insert seed corn metaphor here.
