A common concern we hear about AI is whether data uploaded to a chatbot like ChatGPT or Grok will be absorbed into the vendor's systems and used to train its models.
The short answer is no, it won’t. The slightly longer answer is that there is an easy-to-find setting, “Improve the model for everyone,” that you can turn off, along with a “temporary chat” mode that ensures even your own ChatGPT account won’t retain the session.
Of greater concern should be cybersecurity. ChatGPT, Claude, and other chatbots are web-based tools with ordinary authentication, so they can be hacked just like any other cloud software (or, for that matter, any software at all). Where AI differs is its breadth of use: we get so comfortable dropping information into it that we relax our normal standards and end up storing uploads, or even answers to our questions, that we would ordinarily treat much more carefully.
Similarly, when private chatbot accounts are used for company business, data and content can walk out the door with employees as they change jobs, and in a far more usable format than would normally be the case. Where possible, it is much better for a company to provide enterprise access, for example through ChatGPT’s “Team” subscription, than to allow private chatbots, at least for now while we are all learning to use these tools securely.