Artificial intelligence has great potential to support Process Safety Management (PSM) by helping teams make more consistent and defensible decisions, especially around a common but critical question:
Does this change qualify as a Replacement in Kind (RIK), or does it require a formal Management of Change (MOC)?
Step 1: Define the PoC Objective
The PoC aims to demonstrate that a Large Language Model (LLM) can:
- Review proposed changes and classify them as RIK or MOC
- Explain its decisions based on OSHA standards and company-specific policies
- Improve consistency across PSM teams by surfacing relevant reasoning
This is not about replacing human judgment but about augmenting it with faster, standards-informed decision support.
Step 2: Anchor the AI in Recognized Guidance
Rather than relying solely on legacy classification examples, this PoC grounds the AI in publicly available regulatory texts and internal documentation:
Sources of Truth:
- OSHA 29 CFR 1910.119 — The U.S. federal Process Safety Management standard; paragraph (b) defines Replacement in Kind and paragraph (l) requires written Management of Change procedures
- Internal company policies and procedures — Especially MOC, PSM, and engineering standards
- Open-source safety references — Such as guidance from NIOSH, DOE, or industry-wide white papers
These documents help the AI:
- Understand what qualifies as RIK
- Recognize examples, exceptions, and edge cases
- Assess safety, design, function, and interchangeability
Two Options for Leveraging These Sources:
➤ A. Prompt-Based Model with Retrieval-Augmented Generation (RAG)
- Extract and chunk relevant content (300–800 tokens per chunk) from OSHA, internal docs, and open resources
- Embed the chunks and store them in a vector database (e.g., Pinecone, Weaviate)
- At runtime, the model retrieves the most relevant sections to inform each decision (sketched below)
✅ Best for fast prototyping using hosted models like GPT-4 or Claude
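A minimal sketch of the retrieval step, with an in-memory index standing in for a hosted vector database; the embedding model, sample chunks, and top_k value are illustrative assumptions:

```python
# Minimal RAG retrieval sketch. An in-memory numpy index stands in for
# Pinecone/Weaviate; the embedding model and sample chunks are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Pretend these chunks were extracted from OSHA 1910.119 and internal MOC policy.
chunks = [
    "Replacement in kind means a replacement which satisfies the design specification.",
    "The employer shall establish written procedures to manage changes to process "
    "chemicals, technology, equipment, and procedures.",
]
index = embedder.encode(chunks, normalize_embeddings=True)  # shape: (n_chunks, dim)

def retrieve(query: str, top_k: int = 3) -> list[str]:
    """Return the top_k chunks most similar to the proposed change description."""
    q = embedder.encode([query], normalize_embeddings=True)
    scores = (index @ q.T).ravel()  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]

# Retrieved excerpts are pasted into the LLM prompt alongside the proposed change.
context = retrieve("Swap pump impeller for the identical model and material")
```

The same retrieve-then-prompt pattern carries over unchanged when Pinecone or Weaviate handles the storage and similarity search.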
➤ B. Fine-Tune a Private Model
- Build a labeled dataset using Q&A pairs from your MOC documentation and public guidelines
- Include borderline cases and policy interpretations
- Fine-tune a compact LLM (e.g., Mistral or LLaMA) with LoRA for local/private deployment (sketched below)
✅ Best for long-term use where offline control is a priority
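A sketch of the LoRA setup using Hugging Face transformers and peft; the base checkpoint, hyperparameters, and the single labeled example are illustrative assumptions, and the training loop itself is omitted:

```python
# LoRA adapter setup sketch (transformers + peft). Base model and hyperparameters
# are assumptions for illustration; the training loop is omitted for brevity.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# One labeled example in the Q&A format described above (hypothetical case).
example = {
    "prompt": "Proposed change: replace valve V-101 with the identical model and "
              "trim from the same vendor. RIK or MOC?",
    "completion": "RIK. The replacement satisfies the original design specification, "
                  "so per 1910.119(b) it is a replacement in kind and no MOC is required.",
}

lora = LoraConfig(
    r=8,                                   # low adapter rank keeps PoC training cheap
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```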
Step 3: Select a Secure and Cost-Effective LLM Platform
Criteria:
- Usage-based or rental pricing to manage PoC costs
- Data privacy guarantees (no training on user inputs)
- Enterprise compliance and regional hosting options (e.g., Canada or U.S.)
Recommended options:
- Azure OpenAI Service (GPT-4 with enterprise controls; configuration sketched below)
- Amazon Bedrock (Anthropic Claude or Cohere with secure hosting)
- Hugging Face Inference Endpoints (for fine-tuning open-source models)
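As a concrete example for the Azure option, the standard OpenAI Python SDK can be pointed at an enterprise deployment; the endpoint, key variable, and API version below are placeholders for your own resource:

```python
# Sketch: pointing the OpenAI SDK at an Azure OpenAI enterprise deployment.
# Endpoint, key variable, and api_version are placeholders for your resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # regional endpoint (e.g., Canada East)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
```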
Step 4: Curate a Sample Training and Validation Dataset
Use a de-identified subset of your company’s historical MOC and RIK cases to validate the AI agent’s reasoning.
Ensure:
- All identifying information is masked
- Each case includes a rationale or disposition
- The dataset contains a mix of obvious and borderline examples
This helps validate the AI’s logic, not override regulatory definitions. One possible record format is sketched below.
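For illustration, a single de-identified validation record might look like the following; the field names are an assumed schema, not a requirement:

```python
# One de-identified validation case; the field names are an illustrative schema.
case = {
    "case_id": "MOC-####-0042",          # internal tracking number, masked
    "description": "Replace heat exchanger E-### with a higher-duty unit "
                   "from a different vendor.",
    "label": "MOC",                      # SME disposition
    "rationale": "The new unit exceeds the original design specification, "
                 "so it is not a replacement in kind.",
    "difficulty": "obvious",             # or "borderline", to balance the mix
}
```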
Step 5: Build the AI Workflow
Prototype example:
- User submits a proposed change
- AI classifies it as RIK or MOC
- AI returns:
- Classification (RIK / MOC)
- Confidence score
- Reasoning with references to OSHA and internal policies
Optional: Results can be visualized in a simple UI or Power BI dashboard for feedback and case review.
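A minimal sketch of this loop, reusing the retrieve() helper from Step 2 and the client from Step 3; the prompt wording, deployment name, and response schema are illustrative assumptions:

```python
# Workflow sketch: classify a proposed change using retrieved context.
# Assumes retrieve() (Step 2) and client (Step 3) are in scope; the prompt,
# deployment name, and JSON schema are illustrative.
import json

def classify_change(description: str) -> dict:
    context = "\n".join(retrieve(description))
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder deployment name
        messages=[
            {"role": "system", "content": (
                "You are a PSM assistant. Using the excerpts below from OSHA "
                "1910.119 and internal policy, classify the proposed change as "
                "RIK or MOC. Reply as JSON with keys: classification, "
                "confidence (0-1), reasoning, references.\n\n" + context)},
            {"role": "user", "content": description},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

result = classify_change("Replace pump P-201 impeller with the identical part number.")
```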
Step 6: Validate the Results
Evaluate the AI against real cases (a scoring sketch follows this list):
- Precision/recall metrics for RIK and MOC classification
- Rate of false positives and negatives
- Agreement between AI and SME decisions
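A scoring sketch using scikit-learn, assuming SME labels and model outputs are collected as parallel lists; the data shown is toy data:

```python
# Evaluation sketch: compare SME dispositions with model classifications.
from sklearn.metrics import classification_report, confusion_matrix

y_true = ["RIK", "MOC", "MOC", "RIK", "MOC"]  # SME decisions (toy data)
y_pred = ["RIK", "MOC", "RIK", "RIK", "MOC"]  # model outputs (toy data)

print(classification_report(y_true, y_pred, labels=["RIK", "MOC"]))
print(confusion_matrix(y_true, y_pred, labels=["RIK", "MOC"]))
```

Recall on the MOC class deserves the most scrutiny: a change that needed an MOC but was waved through as RIK is the safety-critical failure mode.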
Ask reviewers to flag:
- Misclassifications
- Ambiguous cases that require improved logic
- Opportunities to refine internal references used by the model
Step 7: Iterate, Scale, or Pause
If results are promising:
- Expand training data and refine prompts
- Pilot the tool in live workflows (with a human review loop)
- Explore integration with your FACILEX or MOC tracking system
If results are mixed:
- Revisit dataset or retrieval strategy
- Try different models or prompting techniques
- Document lessons learned for a future round
Safeguarding Confidentiality
Throughout the project:
- Never send production data to public LLMs
- Share only anonymized, sanitized data with AI platforms
- Involve legal and security teams in reviewing data handling and vendor terms
Use platforms with strict “no training on inputs” policies, confirmed in writing.
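To make the anonymization requirement concrete, a naive pattern-based scrub is sketched below; the regex patterns are assumptions about internal tag and case-number formats, and real de-identification still requires legal/security review:

```python
# Naive de-identification sketch; patterns are assumptions about internal formats.
# Real sanitization needs legal/security sign-off before anything leaves the firewall.
import re

def scrub(text: str) -> str:
    text = re.sub(r"\bMOC-\d{4}-\d+\b", "MOC-####-###", text)       # case numbers
    text = re.sub(r"\b[A-Z]{1,3}-\d{2,5}\b", "EQUIP-###", text)     # equipment tags like P-201
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)  # email addresses
    return text

print(scrub("Contact j.doe@example.com about MOC-2021-0042 for pump P-201."))
# -> "Contact [EMAIL] about MOC-####-### for pump EQUIP-###."
```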
Closing Thoughts
This PoC offers a low-risk way to explore AI’s ability to support one of the most judgment-heavy tasks in PSM. By grounding the model in transparent, regulatory-aligned content, such as OSHA standards and internal procedures, you maintain decision integrity, boost consistency, and improve the efficiency of the MOC process.



