A common misconception I run across is that AI learns while you use it — the idea being that as you use a given AI, it learns from each session and gets better.
This is at least part of why there is a persistent concern about the safety of data in an AI system: if it can remember and learn from what I just did, won't someone else have access to that information?
This is not how AI works. AI models are trained all at once, then deployed for use. We refer to this as “pre-training” and “inference,” where the inference part is what you, as a user, actually interact with. To train a model, the data needs to be organized, analyzed, and debiased as much as possible.
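The split between pre-training and inference can be sketched in a few lines. This is a toy model with hypothetical names, not a real system; the point is only that weights change during training and are read-only during inference:

```python
# Conceptual sketch: a "model" whose weights change only during training.
# The names here (Model, train, infer) are illustrative, not a real API.

class Model:
    def __init__(self):
        self.weights = {}  # stands in for billions of parameters

    def train(self, examples):
        # Pre-training: the only phase that writes to the weights.
        for prompt, response in examples.items():
            self.weights[prompt] = response

    def infer(self, prompt):
        # Inference: serving a user reads the weights but never writes them.
        return self.weights.get(prompt, "I don't know")

model = Model()
model.train({"greeting": "Hi there"})  # happens once, before deployment

snapshot = dict(model.weights)
model.infer("greeting")        # a user interaction
model.infer("something new")   # even unfamiliar prompts change nothing
assert model.weights == snapshot  # the model did not learn from use
```

However many times `infer` is called, the weights are identical afterward — which is the whole point: use does not change the model.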
Chatbots like ChatGPT offer a “memory” feature to make the system better at working with you. But these are just sentences the system has condensed from some of what you’ve done with it, or told it about yourself, and they can be accessed in the personalization settings. Those memories rarely, if ever, contain sensitive information; they’re just tips on how to answer you more effectively, like “works in construction” or “is 53 years old.”
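Mechanically, that kind of memory can be thought of as plain text re-sent with each request. Here is a minimal sketch (the function names and prompt format are assumptions for illustration, not any vendor's actual implementation):

```python
# Hypothetical sketch: "memory" as short notes prepended to each request.
# The model's weights are untouched; only this list of strings persists.

memories = []  # stored per user; viewable and editable in settings

def remember(note):
    memories.append(note)

def build_prompt(user_message):
    # Every request re-sends the saved notes as context.
    # Nothing is "learned" by the model itself.
    context = "\n".join(f"- {m}" for m in memories)
    return f"Facts about the user:\n{context}\n\nUser: {user_message}"

remember("works in construction")
remember("is 53 years old")
print(build_prompt("What ladder should I buy?"))
```

Delete a note from the list and the system simply stops mentioning it — there is no trace of it left in the model, because it was never in the model to begin with.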
Although work is underway to develop AI systems that learn while in operation, none are currently available, and it’ll likely take some time before they become a reality. In the meantime, think of AI models as engines rather than brains: they can do useful things, but their effectiveness doesn’t increase with use.