Using Generative AI Responsibly

by Andy Cooke

CISO, Neil Hoosier & Associates, and Principal SSO for the OFM Coordination of Benefits and Recovery Program

Security and Privacy Risks

With the rise of AI tools like ChatGPT, organizations must understand the security and privacy risks before allowing employees to use them. The main risks include:

  • Sharing sensitive data with AI providers that may use it to improve their models or fail to secure it properly.
  • Accidentally disclosing restricted data, such as HIPAA-protected information, if the AI provider is not authorized to handle it.

Reducing These Risks

Organizations can take several steps to reduce security and privacy risks, including:

  • Creating policies and guidelines to educate employees on safe AI use.
  • Approving only AI tools that meet minimum security standards.
  • Using technical safeguards such as monitoring AI interactions.
  • Exploring self-hosted AI models to keep data within the organization.

Clear policies are essential, even if monitoring AI use is difficult. Key policies may include:

  • Allowing only company-approved AI tools for work-related tasks.
  • Prohibiting AI for recruitment candidate analysis, personnel decisions, or employee monitoring.
  • Strictly prohibiting employees from sharing sensitive data with any AI tools.
  • Requiring employees to report accidental data sharing to IT or security teams immediately.

Security Best Practices for AI Tools

When selecting AI tools for employees, organizations should ensure the service meets security standards, such as:

  • Not using user inputs for AI model training.
  • Having security certifications such as SOC 2, ISO 27001, FedRAMP, or HITRUST.
  • Following clear data retention and deletion policies.
  • Encrypting data at rest and in transit.
  • Providing strong access controls and authentication, such as multi-factor authentication (MFA).

Organizations that manage their employees’ computers and networks can also use existing security tools, such as:

  • Data Loss Prevention (DLP) tools to prevent unauthorized uploads of sensitive files or text (see the sketch after this list).
  • Web filters and firewalls to block non-approved AI websites.
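
As a hedged illustration of the DLP approach, the short Python sketch below shows the kind of content inspection such tools perform before data leaves the organization. The regex patterns and the scan_outbound_text helper are assumptions invented for this example; commercial DLP products use far richer detection logic.

    import re

    # Illustrative patterns only; real DLP products detect many more data types.
    SENSITIVE_PATTERNS = {
        "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
        "date-of-birth label": re.compile(r"\b(?:DOB|date of birth)\b", re.IGNORECASE),
    }

    def scan_outbound_text(text):
        """Return the names of any sensitive-data patterns found in outbound text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    prompt = "Summarize this record: John Doe, SSN 123-45-6789, DOB 01/02/1960."
    findings = scan_outbound_text(prompt)
    if findings:
        print("Blocked: prompt appears to contain " + ", ".join(findings) + ".")
    else:
        print("No obvious sensitive data detected; allowing the request.")

In practice a check like this would live in an endpoint agent, browser extension, or outbound proxy rather than a standalone script, so that obvious identifiers are caught before a prompt ever reaches an external AI provider.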

Neil Hoosier & Associates, Inc. (NHA) has already implemented these best practices and tools to safeguard both our clients’ and our own company’s data and information.

Other Considerations

Organizations may also choose to host their own AI models on internal servers (e.g., Ollama, LM Studio, PrivateGPT) or in a secure cloud environment (e.g., AWS Bedrock, Azure OpenAI).
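
As a minimal sketch of the self-hosted approach, the Python example below queries a local model through Ollama’s REST API. The llama3 model name, the prompt, and a locally running Ollama server on its default port are assumptions for this example.

    import json
    import urllib.request

    # Assumes an Ollama server running locally on its default port (11434)
    # with a model already pulled, e.g. via `ollama pull llama3`.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    payload = json.dumps({
        "model": "llama3",  # assumed model name for this example
        "prompt": "Summarize our acceptable-use policy for AI tools in two sentences.",
        "stream": False,    # ask for a single complete JSON response
    }).encode("utf-8")

    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

    # Send the prompt to the local server and print the generated text.
    with urllib.request.urlopen(request) as reply:
        print(json.loads(reply.read())["response"])

Because the request targets localhost, the prompt and the model’s response never leave the organization’s infrastructure, which is the core privacy benefit of self-hosting.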

Free resources are available to help manage AI risks, such as the NIST AI Risk Management Framework (AI RMF) and the OWASP Top 10 for Large Language Model Applications.

Since AI is evolving quickly, security risks and solutions will also change. NHA will stay up to date to ensure responsible and secure AI use and encourages all organizations to do the same.

With nearly 30 years of experience in IT and cybersecurity leadership, Andy specializes in aligning security strategies with business objectives to drive organizational success. As emerging technologies like Generative AI introduce new security and privacy challenges, Andy is committed to identifying and addressing these risks—ultimately enabling the responsible and strategic use of AI for long-term resilience and innovation.

Our Certifications

Capability Maturity Model Integration (CMMI)

Small Business Administration 8(a): Business Development Program

Minority Business Enterprise (MBE) of GA, KS, MA, MD, NY, NYC, PA and WI

Department of Transportation (DOT) Disadvantaged Business Enterprise