A common misconception I run across is that AI learns while you use it: the idea that, as you work with a given AI, it learns from each session and gets better.
This is at least part of why there is persistent concern about the safety of data in AI systems: if the model can remember and learn from what I just did, surely someone else could access that information?
This is not how AI works. AI models are trained all at once and then deployed for use. We refer to these two phases as “pre-training” and “inference,” and the inference phase is what you, as a user, actually interact with. During inference, the model’s parameters are fixed, so nothing you type changes the model itself. Before training, the data needs to be organized, analyzed, and debiased as much as possible.
Chatbots like ChatGPT offer “memory” as a feature to make the system better at working with you. In reality, these memories are just short notes the system has condensed from some of what you’ve done with it, or told it about yourself, and they can be accessed in the personalization settings. They rarely, if ever, contain sensitive information; they’re just tips on how to answer you more effectively, like “works in construction” or “is 53 years old.”
Although work is underway to develop AI systems that learn while in operation, none are currently available, and it will likely take some time before they become a reality. In the meantime, think of AI models as engines rather than brains that learn. They can do useful things, but their effectiveness doesn’t increase with use.