A common misconception I run across a lot is that AI learns while you use it. The idea is that as you use a given AI, it will learn from each session and get better.
This is at least part of why there is a persistent concern about the safety of data in an AI system - if it can remember and learn from what I just did, surely someone else will have access to that information?
This is not how AI works. AI models are trained all at once, then deployed for use. These two phases are called "pre-training" and "inference," and the inference phase is what you, as a user, actually interact with. Before training, the data needs to be organized, analyzed, and debiased as much as possible.
Chatbots like ChatGPT offer a "memory" feature meant to make the system better at working with you. Still, these memories are just sentences the system has condensed from some of what you've done with it, or told it about yourself, and they can be viewed in the personalization settings. They rarely, if ever, contain sensitive information; they're just notes on how to answer you more effectively, like "works in construction" or "is 53 years old."
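To make the distinction concrete, here is a minimal sketch of how a "memory" feature can work. This is a hypothetical illustration, not ChatGPT's actual implementation: the function names and the prompt format are my own. The key point is that the saved notes are simply prepended to each new request; the model itself never changes.

```python
# Hypothetical sketch of a chatbot "memory" feature.
# The model's weights are fixed; "memory" is just stored text
# that gets injected into every new prompt.

saved_memories = [
    "works in construction",
    "is 53 years old",
]

def build_prompt(user_message: str, memories: list[str]) -> str:
    """Combine stored notes with the new message into one prompt string."""
    notes = "\n".join(f"- {m}" for m in memories)
    return f"Notes about the user:\n{notes}\n\nUser: {user_message}"

print(build_prompt("What work boots should I buy?", saved_memories))
```

Deleting the notes (as you can in the personalization settings) removes everything the system "remembers," which would not be possible if the knowledge had been learned into the model itself.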
Although work is underway to develop AI systems that learn while in operation, none are currently available, and it will likely take some time before they become a reality. In the meantime, think of AI models as engines rather than brains: they can do useful things, but their effectiveness doesn't increase with use.