APEX: Automated Plant Executive Intelligence | Kieran 'K' Phang

Case Study · Semiconductor Manufacturing · OSAT · Industry 4.0

APEX

Automated Plant Executive Intelligence

A structured decision workflow prototype that compresses the gap between factory signals and executive action — built for semiconductor packaging operations.

Origin

In February 2026, I attended a university career fair where a process engineer from Amkor Technology described a recurring operational problem: when equipment goes down at any global facility, the path from detection to resolution runs entirely through email and phone calls — and can take up to 12 hours. The ask was specific — an AI system that ingests raw factory data and outputs a structured explanation of what changed and what to do about it, without manual assembly.

I researched several concurrent Amkor job postings — a Sr. Director of AI/ML, a Sr. Director of Factory Automation, a GenAI Software Engineer, and two FP&A leadership roles — and confirmed the problem was not isolated. It was being addressed simultaneously across Engineering, IT, and Finance. APEX independently arrived at the same architecture those hires were chartered to build.

The internship application was not selected. The systems logic stands independently. APEX is now publicly available as a general-use operational intelligence prototype for any OSAT, semiconductor manufacturer, or advanced packaging operation facing the same class of problem.

Demo Walkthrough

Full pipeline walkthrough — data ingestion through executive briefing export. Recorded on the live prototype using a synthetic dataset modeled on global OSAT factory operations.

Run It Locally

Option A: Anthropic API

```shell
# Clone and install
git clone https://github.com/YOUR_USERNAME/apex
cd apex && npm install

# Add your API key
cp .env.example .env
# Edit .env → set ANTHROPIC_API_KEY

# Start
npm run dev
```

Get an API key at console.anthropic.com

Option B: Local LLM via Ollama

```shell
# Install Ollama → ollama.ai
ollama pull llama3

# Set provider in .env
AI_PROVIDER=ollama
# No API key required

npm run dev
```

No data leaves your machine. Fully air-gapped operation. Recommended for sensitive production data.

Loading sample data

Two synthetic datasets are included. Dataset A uses structured incident IDs that encode factory and tool class directly — best for demonstrating cross-site pattern detection on the sample pipeline. Dataset B uses upload-format IDs with LINE-{SITE}-{LINE} identifiers and readable tool class names — best for demonstrating that APEX works with real-world MES export formats. Use the download links above or click "Load Sample Data" inside the app.
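To make the two ID conventions concrete, here is an illustrative sketch in TypeScript. The field names and example values are hypothetical (only the `K5-EQ-VACUUM-04-001`, `INC-2024-001`, and `LINE-{SITE}-{LINE}` formats come from this document), but they show why the factory must be read from different places in each dataset:

```typescript
// Hypothetical example rows — field names are illustrative, not the exact
// dataset schema. Dataset A encodes factory and tool class inside the
// incident ID; Dataset B uses a generic ID and carries the site in line_id.
const datasetA = {
  incident_id: "K5-EQ-VACUUM-04-001", // first token = factory, middle = tool class
};
const datasetB = {
  incident_id: "INC-2024-001",        // no factory information in the ID
  line_id: "LINE-K5-03",              // LINE-{SITE}-{LINE} — site is "K5"
  tool_class: "Vacuum Pump",          // readable tool class name
};

// Dataset A: factory is the first "-"-separated token of the incident ID.
const factoryA = datasetA.incident_id.split("-")[0]; // "K5"
// Dataset B: factory comes from line_id, not from the incident ID.
const factoryB = datasetB.line_id.split("-")[1];     // "K5"
```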

Executive Abstract

Global OSAT and semiconductor packaging operations — factories across South Korea, Japan, China, Vietnam, Taiwan, the Philippines, Portugal, and the United States — face a shared structural problem. When a machine goes down at any of these sites, the path to resolution involves emails, phone calls across time zones, manual data collection from multiple systems, and a diagnosis that can take up to 12 hours.

That 12-hour window is not a technology failure. It is an information routing failure. The data to diagnose the problem exists. The engineers who can fix it exist. What is missing is a system that connects them — automatically, with context already structured.

In parallel, product managers and business unit analysts spend significant time each week manually assembling operational data from fragmented sources into PowerPoint decks for leadership review. The deck gets built. The decision still lags.

APEX is a structured decision workflow prototype that addresses both problems. It takes raw factory event data, normalizes it across sites, computes decision-relevant KPIs, detects cross-factory tool-level patterns, and generates a structured executive briefing automatically. The goal is not to replace engineering judgment. It is to eliminate the manual assembly step that delays it.

Decision Loop Compression — Before vs. After APEX

BEFORE

Manual workflow — up to 12 hours

Detection

local only

Email / Call

cross time zones

Data Pull

manual, fragmented

Diagnosis

tribal knowledge

Decision

hours later

~12 hrs

avg. resolution latency

AFTER APEX

Automated pipeline — under 30 seconds

Detection

any site

Auto-Normalize

schema mapped

KPI + Diagnose

AI-computed

Structured Brief

auto-generated

Decision

seconds later

<30 sec

data-to-briefing pipeline

APEX does not automate engineering judgment — it eliminates the manual assembly step that delays it.

Problem Landscape

The problems APEX addresses were not stated directly. They were inferred from three overlapping signals: a direct conversation with Amkor Technology's process team at a university career fair, patterns embedded in publicly available job postings, and the structural realities of operating a global OSAT at advanced packaging complexity. Concurrent hiring research confirmed each inference independently.

Downtime Resolution Latency

Equipment failures trigger a manual chain: detect → email → call → pull data from multiple systems → diagnose → act. Across global time zones, this chain can stretch to 12 hours. The bottleneck is not the repair. It is the information routing.

Fragmented Operational Data

Factories across Korea, Japan, Vietnam, and Portugal operate different MES systems with different fault code schemas and yield formats. Corporate receives summarized spreadsheets, not structured feeds. No unified semantic layer exists.

Manual Reporting Cycle

Product managers and engineers spend hours each week pulling data from disparate systems, consolidating into Excel, and building PowerPoint decks for leadership review. It is high-effort, low-value work that delays the decisions it is meant to inform.

Profitability Visibility Lag

Product-level profitability requires joining ERP cost data, production actuals, and pricing structures — all done manually. Margin reporting is periodic and retrospective. Erosion can go undetected for weeks before surfacing in a review cycle.

Pricing and Allocation Misalignment

Pricing decisions across business units are made without a unified view of factory capacity, tool utilization, or real-time cost structure. Volume allocation lags actual equipment loading. Decisions are made from spreadsheets that are already stale.

Executive Reporting Friction

The organization's primary internal communication artifact is a manually built PowerPoint deck assembled every review cycle. Every hour spent building it is an hour not spent on the engineering judgment the deck is supposed to accelerate.

System Vision

APEX is not a dashboard. It is a decision pipeline. A dashboard presents data. A decision pipeline converts data into a structured, actionable briefing without requiring a human to assemble it. The full vision is a cross-factory operational intelligence layer — a system connecting factory signals to corporate decisions across an entire global manufacturing footprint.

System Architecture — Data to Executive Action

LAYER 1 — RAW FACTORY DATA

MES exports · alarm logs · downtime events · yield records · lot genealogy · ERP proxies

LAYER 2 — SCHEMA NORMALIZATION

Cross-factory field mapping · fault code harmonization · alias resolution · confidence scoring · line_id factory extraction

LAYER 3 — METRICS COMPUTATION

KPI rollups: yield · scrap · downtime · utilization · throughput proxy · margin proxy · data-driven severity scoring · anomaly flags · baseline delta

LAYER 4 — LLM INFERENCE (PRIVACY-FIRST LOCAL DEFAULT)

SITREP generation · tool-class cross-site pattern detection · root cause hypotheses · preemptive check recommendations · narrative synthesis

LAYER 5 — STRUCTURED EXECUTIVE OUTPUT

5-section briefing · financial exposure panel · recommended actions · shift handover draft · export-ready briefing

Privacy-First Default — No external data egress by default · Ollama local inference supported
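The five layers read naturally as a function pipeline. A minimal TypeScript sketch of layers 1–3 — type names, field names, and shapes here are illustrative assumptions, not the actual APEX codebase:

```typescript
// Illustrative pipeline sketch — names and shapes are assumptions, not APEX's API.
type RawEvent = Record<string, string>;  // Layer 1: raw MES / alarm / downtime rows
type Incident = { factory: string; toolClass: string; downtimeMin: number };
type Kpis = { totalDowntimeMin: number; incidentCount: number };

// Layer 2: schema normalization — map raw fields to a canonical incident shape,
// extracting the factory from the LINE-{SITE}-{LINE} line_id.
const normalize = (rows: RawEvent[]): Incident[] =>
  rows.map((r) => ({
    factory: (r.line_id ?? "").split("-")[1] ?? "UNKNOWN",
    toolClass: r.tool_class ?? "UNKNOWN",
    downtimeMin: Number(r.downtime_minutes ?? 0),
  }));

// Layer 3: KPI rollups over the normalized incidents.
const computeKpis = (incidents: Incident[]): Kpis => ({
  totalDowntimeMin: incidents.reduce((s, i) => s + i.downtimeMin, 0),
  incidentCount: incidents.length,
});

// Layers 4–5 (LLM inference, briefing output) consume the structured KPIs.
const rows: RawEvent[] = [
  { line_id: "LINE-K5-03", tool_class: "Vacuum Pump", downtime_minutes: "120" },
  { line_id: "LINE-P1-01", tool_class: "Vacuum Pump", downtime_minutes: "45" },
];
const kpis = computeKpis(normalize(rows));
```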

Five Core Use Cases — Long-Term Vision

UC-1 Global Downtime Intelligence Engine

Problem

12-hour resolution cycle driven by manual information routing across time zones.

Measurable Impact

Reduce mean time to resolution from ~12 hours to under 2 hours. At $8,500/hr throughput loss, a 10-hour reduction per incident represents direct revenue recovery.

Output Format

Structured incident brief — tool class, fault signature, confidence-ranked root causes, recommended first action, escalation path.
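The revenue-recovery arithmetic behind that impact claim is straightforward. A sketch using the figures above — note the $8,500/hr rate is the configurable throughput-loss input from the financial exposure panel, not a measured constant:

```typescript
// Financial exposure of one downtime incident at a configurable loss rate.
const throughputLossPerHour = 8_500; // USD/hr — configurable, from the exposure panel
const hoursBefore = 12;              // manual routing: up to 12 hr to resolution
const hoursAfter = 2;                // target with structured routing

const exposureBefore = hoursBefore * throughputLossPerHour;  // $102,000 per incident
const exposureAfter = hoursAfter * throughputLossPerHour;    // $17,000 per incident
const recoveredPerIncident = exposureBefore - exposureAfter; // $85,000 per incident
```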

UC-2 Equipment Utilization and Volume Allocation Optimizer

Problem

Allocation decisions made from static spreadsheets with no unified cross-factory visibility into actual equipment loading.

Measurable Impact

2–5% OEE improvement through better allocation. Reduction in customer delivery misses caused by capacity misreads.

Output Format

Real-time utilization dashboard with allocation recommendation engine. Margin vs. utilization heatmap by factory and tool class.

UC-3 Continuous Product Profitability Monitor

Problem

Profitability reporting is periodic and retrospective. Margin erosion from yield fluctuations or cost shifts can go undetected for weeks.

Measurable Impact

Catch margin erosion weeks earlier. Inform re-pricing negotiations with live data rather than stale cycle reports.

Output Format

Real-time gross margin view by product line and factory. Automated alert when margin drops below threshold with AI-generated variance explanation.

UC-4 Automated Executive Narrative Engine

Problem

Manual slide consolidation is a recurring high-effort task across all business units every review cycle.

Measurable Impact

Reduce analyst reporting workload by an estimated 50%. Enforce consistent reporting language across business units.

Output Format

Auto-generated weekly briefing deck. 5-section executive digest with factory-by-factory delta analysis and KPI deviation narrative.

UC-5 Cross-Factory Yield and Process Drift Detection

Problem

Yield degradation can occur gradually across sites without centralized pattern recognition. Common supplier issues may appear as isolated local events.

Measurable Impact

Prevent multi-million dollar yield excursions. Surface systemic supplier or equipment fleet issues before they cascade across factories.

Output Format

Early warning alerts. Cross-factory comparison dashboard. Root cause hypotheses ranked by confidence with preemptive check recommendations for unaffected sites.

Demo Implementation

Scope Boundary

The current APEX implementation is a focused prototype demonstrating the core data-to-briefing pipeline using synthetic datasets modeled on real global factory footprints. It is not a production system. The distinction between prototype capability and production readiness is maintained explicitly throughout.

Demo Pipeline — Five Steps, Under Five Minutes

01

Upload CSV

Drag-drop or load sample dataset

02

Normalize

Schema map · alias resolve · factory extract

03

Compute KPIs

Yield · downtime · severity · margin proxy

04

LLM Diagnose

SITREP · tool-class pattern detect

05

Briefing Out

5-section brief · export ready

What the Demo Includes

  • Two synthetic datasets — structured sample and upload-format MES export
  • Multi-file ingestion with schema auto-mapping, alias resolution, and factory extraction from line_id
  • KPI computation: yield, scrap, downtime, utilization, throughput proxy, margin proxy
  • Data-driven severity scoring from downtime, scrap rate, and yield fields — not ID string heuristics
  • Factory color coding with severity bar indicators — factory identity and incident severity shown simultaneously
  • Tool-class cross-site pattern detection — flags same tool class failing across multiple factories with preemptive check recommendations for unaffected sites
  • Collapsible cross-site pattern panel with horizontally scrollable pattern cards
  • Structured SITREP with confidence-scored root cause hypotheses and cross-factory pattern synthesis
  • Pre-written executive directives — copy-ready, role-specific, incident-specific
  • Financial exposure panel with configurable throughput rate
  • Printable executive briefing export
  • Local LLM support via Ollama — no external data egress

What the Demo Explicitly Does Not Include

  • Live MES integration (requires internal data access)
  • Real production data of any kind
  • Authentication or multi-tenant isolation
  • Formal automated test suite
  • Production-hardened API security
  • Role-based access control
  • Persistent database (local JSON only)
  • Lot genealogy tracing
  • Customer commitment risk scoring
  • Pricing or CapEx optimization models

What Changed During Development

The initial prototype worked correctly on the hand-crafted sample dataset but failed silently on real-format uploads. Diagnosing and correcting those failures is where the substantive engineering work happened. These are not bugs that were patched — they are architectural decisions that were revised after the failure mode was understood.

Factory identity derived from incident ID string parsing

Symptom

All uploaded incidents resolved to UNKNOWN factory. Factory breakdown panel showed a single grey block. Cross-site patterns never triggered.

Root Cause

Sample data used structured IDs like K5-EQ-VACUUM-04-001 where the first token encodes the factory. Uploaded MES exports use generic IDs like INC-2024-001. The parser returned INC for every row.

Resolution

Factory is now extracted from the line_id field using a regex that pulls the site code from LINE-{SITE}-{LINE} format. This matches how real MES systems tag production lines. Factory identity is persisted through the full pipeline as a first-class field.
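A minimal version of that extraction might look like the following. The exact regex in APEX may differ; this sketch only assumes the LINE-{SITE}-{LINE} format described above:

```typescript
// Extract the site code from a LINE-{SITE}-{LINE} identifier.
// Returns "UNKNOWN" rather than a garbage token when the format doesn't match —
// the original bug came from blindly taking the first token of the incident ID.
function extractFactory(lineId: string): string {
  const m = /^LINE-([A-Z0-9]+)-/.exec(lineId.trim().toUpperCase());
  return m ? m[1] : "UNKNOWN";
}

extractFactory("LINE-K5-03");   // "K5"
extractFactory("INC-2024-001"); // "UNKNOWN" — not a line_id, no false positive
```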

Severity derived from incident ID keyword matching

Symptom

All uploaded incidents showed nominal severity regardless of actual downtime or scrap values. Sample data showed correct severity because its IDs contain EQ, PROC, YIELD keywords.

Root Cause

toneForIncident() parsed keywords from the incident ID string. Uploaded IDs contain no keywords. The function had no access to the actual operational data fields.

Resolution

Severity is now computed from downtime_minutes, scrap_units, and yield rate during ingestion and persisted as severity_tone. ID keyword parsing remains as a fallback for sample data compatibility only.
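A threshold-rule sketch of that computation follows. The cutoff values here are illustrative placeholders, not APEX's actual thresholds; the point is that severity derives from operational fields, never from ID keywords:

```typescript
type SeverityTone = "critical" | "warning" | "nominal";

// Severity from operational fields, not ID keywords. Thresholds are illustrative.
function severityTone(
  downtimeMinutes: number,
  scrapUnits: number,
  yieldRate: number,
): SeverityTone {
  if (downtimeMinutes >= 240 || scrapUnits >= 500 || yieldRate < 0.90) return "critical";
  if (downtimeMinutes >= 60 || scrapUnits >= 100 || yieldRate < 0.97) return "warning";
  return "nominal";
}

severityTone(300, 0, 0.99);  // "critical" — long downtime dominates
severityTone(20, 10, 0.995); // "nominal" — all fields within thresholds
```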

Cross-site patterns grouped by event_type only

Symptom

Pattern cards showed DOWNTIME across 9 factories — technically correct but not actionable. An engineer cannot do anything with that signal.

Root Cause

The grouping key was event_type alone. No tool identity was included. Every downtime event at every factory was treated as evidence of the same pattern.

Resolution

Patterns now group by tool class combined with event type. A pattern fires when the same tool class has the same failure type at two or more factories. Each card shows which factories are not yet affected and recommends preemptive inspection at those sites.
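The grouping logic can be sketched as follows; function and field names are illustrative, but the composite key and the two-or-more-factories threshold are as described above:

```typescript
type Incident = { factory: string; toolClass: string; eventType: string };

// A pattern fires when the same tool class shows the same failure type at two
// or more factories. Key = tool class + event type, never event type alone.
function crossSitePatterns(incidents: Incident[], allFactories: string[]) {
  const groups = new Map<string, Set<string>>();
  for (const i of incidents) {
    const key = `${i.toolClass}::${i.eventType}`;
    if (!groups.has(key)) groups.set(key, new Set());
    groups.get(key)!.add(i.factory);
  }
  return [...groups.entries()]
    .filter(([, factories]) => factories.size >= 2) // the ≥2-factory threshold
    .map(([key, factories]) => ({
      key,
      affected: [...factories],
      // sites not yet affected get the preemptive-check recommendation
      checkPreemptively: allFactories.filter((f) => !factories.has(f)),
    }));
}

const patterns = crossSitePatterns(
  [
    { factory: "K5", toolClass: "Vacuum Pump", eventType: "DOWNTIME" },
    { factory: "P1", toolClass: "Vacuum Pump", eventType: "DOWNTIME" },
    { factory: "K5", toolClass: "Wire Bonder", eventType: "DOWNTIME" },
  ],
  ["K5", "P1", "T1"],
);
// One pattern: Vacuum Pump::DOWNTIME at K5 and P1, preemptive check at T1
```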

Future Capabilities

The current prototype establishes the core data-to-briefing pipeline. The following capabilities represent the next layer of operational intelligence — each grounded in a specific failure mode that the existing system cannot yet address.

F-1 Shift Handover Generation (Near-term)

At end of shift, APEX automatically produces a structured handover document: what happened, what is still open, what the incoming shift needs to watch immediately. Every factory does this manually every 8 hours. The pipeline already has everything needed — it is a prompt and output format change, not an architecture change.

F-2 Recurring Anomaly Detection Across Sessions (Near-term)

Currently, each upload is treated as an isolated dataset. If EQ-PROBE-04 had a chuck alignment failure last week and again today, APEX does not surface that recurrence. Persistent incident history across sessions enables trend detection that a single-session system structurally cannot provide.

F-3 Maintenance Window Recommendation Engine (Medium-term)

Given downtime patterns, tool failure frequency, and planned production schedules, APEX suggests optimal PM windows. The preemptive check logic already identifies which tools to inspect — this extends it with a when and where recommendation tied to actual utilization windows.

F-4 Lot Genealogy Tracing (Medium-term)

When a yield excursion hits, which lots are affected? Which customer shipments are at risk? APEX currently knows scrap counts but not which specific lots they belong to. Lot-level tracking turns the financial exposure panel from an estimate into an actual liability calculation — the difference between an alert and an actionable customer communication.

F-5 Supplier and Fleet Correlation Flagging (Medium-term)

If three tools from the same equipment vendor fail in the same week across different sites, that is a fleet-wide signal — possibly a bad firmware update, a consumable batch issue, or a design defect. Tool class grouping already surfaces the pattern; vendor attribution turns it into an escalation path.

F-6 Natural Language Data Querying (Step-change)

Instead of uploading a CSV and waiting for the pipeline, an engineer types: 'show me all EQ-PROBE incidents at K5 in the last 30 days with downtime over 2 hours.' The assistant briefing panel already exists — this extends it with persistent storage and a structured query interface. APEX becomes interactive, not just batch.

F-7 Baseline Drift Detection Over Time (Step-change)

The current system compares against a static baseline. A more powerful version tracks rolling baselines per tool class per site. If K5's EQ-PROBE-04 degrades gradually over six weeks, APEX catches the drift before it becomes a failure event. This is the difference between reactive and predictive — the capability that justifies the system's long-term operational value.

F-8 Customer Commitment Risk Scoring (Step-change)

Given a yield excursion or downtime event, APEX estimates the probability of missing a committed ship date for affected products. This reframes operational data in terms executives and sales actually act on — not 'we had 200 minutes of downtime' but 'there is a 70% probability we miss the Qualcomm shipment on Friday.' That is a fundamentally different conversation.

Technical Architecture

Frontend

Next.js 16

App Router + React 19 + TypeScript

AI Layer

Vercel AI SDK

Streaming responses + structured JSON output

Charts

Recharts

Interactive KPI + utilization views

Storage

Local JSON

No external DB — demo scope only

Current API Surface

POST /api/ingest

Accepts CSV, JSON, text via multipart — schema maps to canonical incident format, factory extracted from line_id

GET /api/datasets

List all ingested dataset index entries

GET /api/incidents

List normalized incident summaries with factory, severity_tone, and tool_class fields

GET /api/metrics?incident_id=…

Return computed MetricsPayload with KPIs, anomalies, top contributors

POST /api/generate

Stream structured SITREP JSON with cross-site pattern context — baseline or what-if mode

POST /api/assistant

Stream AssistantInsight — follow-up Q&A, data gap requests, manager update copy
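A typical round trip against this surface, sketched with `fetch`. Only the routes themselves come from the table above; the payload shapes and the `metricsUrl` helper are assumptions for illustration:

```typescript
// Sketch of a client round trip. Routes are APEX's; payload shapes are assumed.
function metricsUrl(baseUrl: string, incidentId: string): string {
  // Builds the GET /api/metrics?incident_id=… request URL
  return `${baseUrl}/api/metrics?incident_id=${encodeURIComponent(incidentId)}`;
}

async function runPipeline(baseUrl: string, csv: Blob) {
  // 1. Ingest a CSV export — multipart, per POST /api/ingest
  const form = new FormData();
  form.append("file", csv, "incidents.csv");
  await fetch(`${baseUrl}/api/ingest`, { method: "POST", body: form });

  // 2. List normalized incidents with factory / severity_tone / tool_class
  const incidents = await (await fetch(`${baseUrl}/api/incidents`)).json();

  // 3. Pull computed KPIs for the first incident (field name is assumed)
  const metrics = await (
    await fetch(metricsUrl(baseUrl, incidents[0].incident_id))
  ).json();

  return { incidents, metrics };
}
```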

Expansion Path

Phase 1 — Current Prototype

  • CSV ingestion — structured and upload-format
  • Factory extraction from line_id
  • Schema normalization + KPI computation
  • Data-driven severity scoring
  • Factory color coding + severity bar indicators
  • Tool-class cross-site pattern detection
  • Preemptive check recommendations
  • Collapsible cross-site pattern panel
  • SITREP generation with cross-factory context
  • Pre-written executive directives
  • Financial exposure panel
  • Local LLM via Ollama
  • Printable briefing export

Phase 2 — Live Data Integration

  • MES API connection or scheduled export
  • Real-time incident detection and alert routing
  • Persistent database replacing local JSON
  • Multi-factory concurrent monitoring
  • Shift handover generation
  • Recurring anomaly detection across sessions

Phase 3 — Profitability and Lot Intelligence

  • ERP cost data integration
  • Real-time margin monitoring by product and factory
  • Lot genealogy tracing
  • Customer commitment risk scoring
  • Supplier and fleet correlation flagging
  • Parametric pricing model

Phase 4 — Predictive Intelligence and Role-Based Automation

  • Baseline drift detection over time
  • Natural language data querying
  • Maintenance window recommendation engine
  • Role-based views — engineer vs. manager vs. executive
  • Automated scheduled reporting
  • Evidence citations per recommendation — full auditability chain

Engineering Tradeoffs

Deliberate decisions made in the prototype and the production implications of each.

| Decision | Prototype Choice | Production Implication |
| --- | --- | --- |
| Persistence | Local JSON files | Real datastore with versioning, migration support, and cross-session history |
| Factory Resolution | Regex extraction from line_id; ID-prefix fallback for structured samples | Validated factory mapping tables maintained per MES version per site |
| Severity Scoring | Threshold rules on downtime, scrap rate, yield | ML classifier trained on historical labeled incidents per process type |
| Cross-Site Patterns | Tool class + event type grouping; factories.size ≥ 2 threshold | Statistical similarity clustering across normalized event vectors with confidence scoring |
| AI Inference | External API or local Ollama — user's choice | On-premise or VPC-hosted inference — no raw data leaves facility network |
| Output Format | HTML-to-PDF via browser print | Templated PPTX generation or direct Copilot integration |
| Authentication | None (demo only) | Role-based access — engineer, manager, executive, auditor |
| Financial Estimates | Configurable $/hr rate slider | Actual throughput and margin data from ERP integration |

Why This Matters

Advanced semiconductor packaging — flip chip BGA, chiplets, 2.5D integration, wafer-level processing — operates at tolerances and complexity levels where decision latency has direct, measurable financial consequences. A 12-hour downtime resolution window on a critical flip chip bonding tool is not just an operational inconvenience. At industry-standard throughput rates, it is a quantifiable revenue loss, a potential customer commitment miss, and a yield recovery cost that compounds across the lot.

The engineers and product managers who manage these operations are not lacking expertise. They are lacking a system that organizes information at the speed their expertise requires. Every hour spent manually assembling a spreadsheet or building a slide deck is an hour not spent on the engineering judgment that actually resolves the problem.

APEX is built on the premise that the right role for AI in manufacturing is not to make decisions — it is to compress the distance between signals and the humans who make them. The goal is not to replace the process engineer. It is to ensure the process engineer has the right information, structured correctly, before the 12 hours are up.