
Launching an AI Agent for RIK vs MOC Classification

This post outlines a step-by-step approach for launching a proof-of-concept (PoC) AI agent that can assist in classifying proposed changes as Replacement in Kind (RIK) or Management of Change (MOC), drawing on regulatory guidance, open frameworks, and real-world examples while preserving data confidentiality and cost efficiency.

Artificial intelligence has great potential to support Process Safety Management (PSM) by helping teams make more consistent and defensible decisions—especially around a common but critical question:

Does this change qualify as a Replacement in Kind (RIK), or does it require a formal Management of Change (MOC)?

Step 1: Define the PoC Objective

The PoC aims to demonstrate that a Large Language Model (LLM) can:

  • Review proposed changes and classify them as RIK or MOC
  • Explain its decisions based on OSHA standards and company-specific policies
  • Improve consistency across PSM teams by surfacing relevant reasoning

The goal is not to replace human judgment, but to augment it with faster, standards-informed decision support.

Step 2: Anchor the AI in Recognized Guidance

Rather than relying solely on legacy classification examples, this PoC will instruct the AI using publicly available regulatory texts and internal documentation:

Sources of Truth:

  • OSHA 29 CFR 1910.119(l) — The Management of Change paragraph of the U.S. federal Process Safety Management standard
  • Internal company policies and procedures — Especially MOC, PSM, and engineering standards
  • Open-source safety references — Such as guidance from NIOSH, DOE, or industry-wide white papers

These documents help the AI:

  • Understand what qualifies as RIK
  • Recognize examples, exceptions, and edge cases
  • Assess safety, design, function, and interchangeability

Two Options for Leveraging These Sources:

A. Prompt-Based Model with Retrieval-Augmented Generation (RAG)

  • Extract and chunk relevant content (300–800 tokens) from OSHA, internal docs, and open resources
  • Store in a vector database (e.g., Pinecone, Weaviate)
  • At runtime, the model retrieves relevant sections to inform each decision
    Best for fast prototyping using hosted models like GPT-4 or Claude (see the retrieval sketch below)
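For Option A, the retrieval step might look like the rough sketch below. It uses chromadb as a local stand-in for a hosted vector database such as Pinecone or Weaviate; the collection name, chunk text, and example change are illustrative placeholders rather than real guidance excerpts.

```python
# Rough RAG retrieval sketch (illustrative only).
# chromadb serves as a local stand-in for a hosted vector DB such as Pinecone or Weaviate.
import chromadb

client = chromadb.Client()
collection = client.create_collection(name="rik_moc_guidance")

# Chunked excerpts (300-800 tokens each in practice) from OSHA 1910.119(l),
# internal MOC procedures, and open safety references. Placeholder strings here.
chunks = [
    "Replacement in kind: a replacement that satisfies the design specification.",
    "Management of change procedures shall address the technical basis for the change, "
    "its impact on safety and health, and any modifications to operating procedures.",
]
collection.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

# At runtime, retrieve the guidance most relevant to the proposed change
# and place it into the LLM prompt as context.
proposed_change = "Swap the existing pump impeller for one with a larger diameter."
results = collection.query(query_texts=[proposed_change], n_results=2)
context = "\n\n".join(results["documents"][0])
print(context)
```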

B. Fine-Tune a Private Model

  • Build a labeled dataset using Q&A pairs from your MOC documentation and public guidelines
  • Include borderline cases and policy interpretations
  • Fine-tune a compact LLM (e.g., Mistral or LLaMA) using LoRA for local/private deployment
    Best for long-term use where offline control is a priority (see the LoRA sketch below)
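For Option B, a minimal LoRA setup with the Hugging Face transformers and peft libraries could look like the sketch below. The model id, target modules, and hyperparameters are illustrative assumptions, and the actual training loop over the labeled Q&A pairs is omitted.

```python
# Illustrative LoRA configuration for fine-tuning a compact open model.
# Assumes a labeled dataset of RIK/MOC Q&A pairs prepared as in Step 4.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "mistralai/Mistral-7B-Instruct-v0.2"  # example model id, swap as needed
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small adapter matrices while the base weights stay frozen,
# which keeps the fine-tune cheap and easy to deploy privately.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # typical attention projections for Mistral/LLaMA
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training itself is omitted: tokenize the Q&A pairs, pass them to a trainer
# (e.g. transformers.Trainer or trl's SFTTrainer), then save the adapter weights.
```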

Step 3: Select a Secure and Cost-Effective LLM Platform

Criteria:

  • Usage-based or rental pricing to manage PoC costs
  • Data privacy guarantees (no training on user inputs)
  • Enterprise compliance and regional hosting options (e.g., Canada or U.S.)

Recommended options:

  • Azure OpenAI Service (GPT-4 with enterprise controls)
  • Amazon Bedrock (Anthropic Claude or Cohere with secure hosting)
  • Hugging Face Inference Endpoints (for fine-tuning open-source models)

Step 4: Curate a Sample Training and Validation Dataset

Use a de-identified subset of your company’s historical MOC and RIK cases to validate the AI agent’s reasoning.

Ensure:

  • All identifying information is masked
  • Each case includes a rationale or disposition
  • The dataset contains a mix of obvious and borderline examples

This helps validate the AI’s logic without overriding regulatory definitions.
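One possible shape for a de-identified validation record is sketched below; the field names and values are illustrative assumptions rather than a prescribed schema.

```python
# Illustrative de-identified case record (field names are assumptions, not a standard).
example_case = {
    "case_id": "CASE-0001",   # internal tracking id only; no plant or personnel names
    "description": "Replace [PUMP-ID] mechanical seal with the same manufacturer part number.",
    "disposition": "RIK",     # SME decision: "RIK" or "MOC"
    "rationale": "Identical part; no change to design, materials, or operating conditions.",
    "difficulty": "obvious",  # "obvious" or "borderline", to balance the dataset
}
```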

Step 5: Build the AI Workflow

Prototype example:

  1. User submits a proposed change
  2. AI classifies it as RIK or MOC
  3. AI returns:
    • Classification (RIK / MOC)
    • Confidence score
    • Reasoning with references to OSHA and internal policies

Optional: Results can be visualized in a simple UI or Power BI dashboard for feedback and case review.
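A minimal version of this workflow, assuming an OpenAI-compatible hosted endpoint (Azure OpenAI would be configured similarly) and guidance context retrieved as in Step 2, might look like the sketch below; the model name, prompt wording, and example change are placeholders.

```python
# Minimal classification workflow sketch (illustrative; not a production implementation).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_change(description: str, context: str) -> dict:
    """Classify a proposed change as RIK or MOC and return structured reasoning."""
    prompt = (
        "You are assisting with Process Safety Management. Using only the guidance "
        "below, classify the proposed change as RIK or MOC.\n\n"
        f"Guidance:\n{context}\n\n"
        f"Proposed change:\n{description}\n\n"
        'Respond as JSON: {"classification": "RIK" or "MOC", '
        '"confidence": 0.0-1.0, "reasoning": "..."}'
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

result = classify_change(
    "Replace carbon steel piping with stainless steel in the same service.",
    context="Retrieved excerpts from OSHA 1910.119(l) and internal MOC policy.",
)
print(result["classification"], result["confidence"])
print(result["reasoning"])
```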

Step 6: Validate the Results

Evaluate the AI against real cases:

  • Precision/recall metrics for RIK and MOC classification
  • Rate of false positives and negatives
  • Agreement between AI and SME decisions
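As a rough sketch, these metrics can be computed directly from SME-labeled cases, for example with scikit-learn; the labels below are illustrative only.

```python
# Illustrative evaluation against SME decisions (toy labels, not real cases).
from sklearn.metrics import classification_report, confusion_matrix

sme_labels = ["RIK", "MOC", "MOC", "RIK", "MOC"]  # ground truth from historical cases
ai_labels  = ["RIK", "MOC", "RIK", "RIK", "MOC"]  # AI agent's classifications

# Per-class precision/recall and overall agreement.
print(classification_report(sme_labels, ai_labels, labels=["RIK", "MOC"]))

# Off-diagonal entries of the confusion matrix are the false positives/negatives.
print(confusion_matrix(sme_labels, ai_labels, labels=["RIK", "MOC"]))
```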

Ask reviewers to flag:

  • Misclassifications
  • Ambiguous cases that require improved logic
  • Opportunities to refine internal references used by the model

Step 7: Iterate, Scale, or Pause

If results are promising:

  • Expand training data and refine prompts
  • Pilot the tool in live workflows (with human review loop)
  • Explore integration with your FACILEX or MOC tracking system

If results are mixed:

  • Revisit dataset or retrieval strategy
  • Try different models or prompting techniques
  • Document lessons learned for a future round

Safeguarding Confidentiality

Throughout the project:

  • Do not use production data in public LLMs
  • Share only anonymized, sanitized data with AI platforms
  • Involve legal and security teams in reviewing data handling and vendor terms

Use platforms with strict “no training on inputs” policies, confirmed in writing.

Closing Thoughts

This PoC offers a low-risk way to explore AI’s ability to support one of the most judgment-heavy tasks in PSM. By grounding the model in transparent, regulatory-aligned content—such as OSHA and internal procedures—you maintain decision integrity, boost consistency, and improve the efficiency of the MOC process.
