The framework · for mid-tier oil & gas operators

Four closed loops. One QA discipline. The architecture behind verified agentic AI in oil & gas.

Most “agentic AI” pitches in oil and gas ship the agent without the QA layer underneath. The agent looks impressive in a demo. Six months later it has drifted, nobody knows when, and the operator is back to alarms and spreadsheets.

WorkSync runs four end-to-end closed loops across two products on one Data Hub. WellOPS owns the production work loop: Operations, Safety Analysis, and Preventative Maintenance, with Willie as the AI field agent. FlowSync owns the engineering work loop: Automated Engineering, with Taylor as the AI engineer agent. Both run on the WorkSync Data Hub. The QA discipline (six elements) runs underneath every loop.

This page is the framework. Read it once and the rest of the site makes sense as one architecture, not eleven products.

Detect · Score · Route · Execute · Learn · with constraint enforcement, drift detection, audit trail

WorkSync · WellOPS · FlowSync

One company. Two products. One Data Hub.

The company

WorkSync

WorkSync's mission is to leverage technology to drive efficiency and safety in field operations. We see a world where there are zero fatalities and engineers never spend time on data entry again.

Two AI products on one Data Hub. WellOPS for field operations. FlowSync for engineering. The Data Hub is the read-only integration backbone that connects to your existing SCADA, ERP, CMMS, GIS, and historian.

The field-ops product

WellOPS

WellOPS's mission is to be the best pump-by-priority system for oil and gas.

An end-to-end closed-loop automated workflow that balances cash flow, risk, and maintenance to maximize value while efficiently managing asset and personnel safety risks. Integrates with whatever systems you already have, or stands up rapidly with none, and gives your field team a management system they will actually love to use.

Modules: Work Engine, Route Optimizer, Field Data Capture, Field Work Management. AI agent: Willie.

The engineering product

FlowSync

FlowSync's mission is to be the platform for building and managing simulation-based studies for fluid flow and rotating equipment.

An AI-driven engineering platform that automates the build, simulation, and maintenance of models across fluid types and rotating-equipment classes. Builds simulator-ready models from the PDFs, P&IDs, GIS, and SCADA you already have. Integrates with the simulators your team already uses (Synergi, HYSYS, ProMax, OLGA, AFT, PipeSim) or replaces them.

Modules: Model Builder, Flow Simulator, Process Simulator. AI agent: Taylor.

The four loops at a glance

Same shape. Different data sources, different buyer, different vendor competition.

Every loop follows detect, score, route, execute, learn. The unifying claim is not that we invented agentic AI. The unifying claim is that we ship four of them on the same data layer with the same QA discipline, and that no other vendor in oil and gas does that under one roof for the mid-tier operator.

Loop 1 · Operations
Detect
SCADA anomalies, production deviations, alarm-vs-baseline drift
Score
Cash-flow impact × risk × intervention payback
Route
Ranked daily plan in the truck cab by 6 AM
Execute
Mobile app, route-optimized, JSA-gated dispatch
Learn
Closeout outcomes feed back into the ranking model
Primary buyer
VP Ops, Field Operations Manager, Foreman
Vendor competition
Baker Hughes Leucipa, parts of SLB Tela, IBM Maximo

Loop 2 · Automated engineering
Detect
Engineering work request: hydraulic-model rebuild, debottleneck study, completion design, P&ID extraction
Score
ROI per hour of senior-engineer time × simulator-readiness of input data
Route
Five-agent stack (drawing classification, symbol recognition, topology extraction, data reconciliation, spec extraction)
Execute
Agent-built model goes to engineer for verify-and-sign
Learn
Engineer-verified outputs feed back into model training
Primary buyer
Engineering Manager, Senior Reservoir or Process Engineer
Vendor competition
AspenTech ASI, SLB Petrel/Eclipse, Halliburton DecisionSpace, the manual 200-hour study

Loop 3 · Safety analysis
Detect
Dispatch trigger: planned crew assignment, JSA prep, lone-worker route, hot-work permit, contractor showing up
Score
Hazard × OQ + weather + asset history + contractor competency + basin rules
Route
Approve, modify, or block dispatch. Hard constraint, not a weight
Execute
Dispatch-enforced qualification. The unqualified worker cannot be dispatched to the unsafe job
Learn
Incident and near-miss outcomes feed back into hazard scoring
Primary buyer
HSE Director, Field Operations Manager, Compliance Lead
Vendor competition
KPA EHS, VelocityEHS, ISNetworld + hardware (Blackline Safety, SoloProtect, Garmin G7)

Loop 4 · Preventative maintenance
Detect
Equipment health degradation 48-72 hours before failure: ESPs, rod pumps, compressors, valves, instruments
Score
Cost-of-failure × probability × intervention payback
Route
Work order joins the same ranked queue Loop 1 runs
Execute
Field crew or contractor with the right OQ
Learn
Did the prevent-action prevent the failure? Outcome closes the loop
Primary buyer
Maintenance Director, Reliability Engineer
Vendor competition
IBM Maximo, IFS Cloud, Peloton, MaintainX, Leucipa ESP Optimizer
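The shared shape of all four loops can be sketched as a minimal pipeline. This is an illustrative sketch, not WorkSync's implementation; `Task`, the stub functions, and the field names are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """One unit of ranked work (field names are illustrative)."""
    asset: str
    score: float                     # cash-flow impact x risk x payback
    outcome: Optional[bool] = None   # filled in at closeout

def detect(signals):
    # flag deviations worth scoring (stub: anything above baseline)
    return [s for s in signals if s["value"] > s["baseline"]]

def score(anomaly):
    # rank on economics, not raw anomaly magnitude
    return anomaly["impact"] * anomaly["risk"] * anomaly["payback"]

def dispatch(task):
    # stand-in for the execute step (mobile app, route-optimized)
    return True

def run_loop(signals, history):
    # detect -> score -> route (rank) -> execute -> learn
    tasks = sorted((Task(a["asset"], score(a)) for a in detect(signals)),
                   key=lambda t: t.score, reverse=True)
    for t in tasks:
        t.outcome = dispatch(t)   # execute
        history.append(t)         # learn: closeouts feed the next ranking
    return tasks
```

The point of the sketch is the last two lines of the loop body: the outcome lands in `history` and informs the next ranking pass, which is exactly the step an open-loop deployment never takes.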

Per-loop deep dive

One architecture, four instantiations.

Each loop is independently valuable. Most operators activate them in sequence rather than all at once. The order of activation is usually: integrate the data layer first, then Operations, then Preventative Maintenance, then Safety, then Engineering when an engineering project triggers it.

Loop 1, Operations · runs in WellOPS

Ranked daily work in the truck cab by 6 AM. Modules: Work Engine, Route Optimizer, Field Data Capture. AI agent: Willie.

A 26-pumper team chooses from roughly 10^699 daily route combinations across a 2,000-well fleet. Humans cannot solve that. Constraint-based optimization scored on cash-flow impact and risk can. The pumper still drives the truck. The foreman still makes the calls only a human makes. The agent does the math the human shift was never going to do well.
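A back-of-envelope check on the scale, under assumed numbers: 8 stops per pumper per day is an assumption for illustration, and letting each pumper choose independently is a deliberate simplification that overstates the exact count without changing the conclusion.

```python
import math

def log10_routes(wells: int, pumpers: int, stops_per_day: int) -> float:
    """log10 of the number of ordered daily routes if each pumper
    independently picks and orders `stops_per_day` wells from the fleet."""
    # ln of the falling factorial wells * (wells-1) * ... via lgamma
    per_pumper = math.lgamma(wells + 1) - math.lgamma(wells - stops_per_day + 1)
    return pumpers * per_pumper / math.log(10)
```

`log10_routes(2000, 26, 8)` lands in the high hundreds: a search space with hundreds of digits, which is why this is an optimizer's job, not a whiteboard's.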

The Operations loop is where most mid-tier operators see the largest near-term cash-flow lift. The reference deployment we publish (top 25 private producer, 5,000+ wells across the Western Anadarko, the Permian, and Wyoming) reports 15 percent FCF uplift on the same headcount, 35 percent fewer site visits, and TRIR moving from 1.8 to 0.3 across the same operation. That is one loop running well. The same operator runs three of the four loops today.

Loop 1 is what most agentic-AI pitches in oil and gas are competing on. Baker Hughes Leucipa (with the Lucy conversational assistant) and SLB Tela are the named competitors. The full head-to-head sits on the comparison pages. The differentiator we lean on for mid-tier US operators is time-to-value (4 weeks to a ranked plan in the truck cab), pricing below VP signing authority on the entry tier, and the QA layer underneath.

Loop 2, Automated engineering · runs in FlowSync

The 200-hour study in 20 minutes. Modules: Model Builder, Flow Simulator, Process Simulator. AI agent: Taylor.

A senior engineer building a hydraulic model from scratch spends 200-plus hours per study, with a meaningful share of that time on data preparation rather than analysis. Multiply by 15-plus simulator platforms (Synergi, AFT Fathom/Arrow/Impulse, OLGA, PipeSim, Aspen HYSYS, UniSim, ProMax, EPANET, WaterGEMS, MIKE+, others) and the engineering bench across a mid-tier operator runs at capacity for the foreseeable future. The work that actually moves cash flow (debottleneck studies, M&A integration models, completion redesigns) queues behind the data work.

FlowSync is the engineering work loop. Five specialized agents stacked: drawing classification, symbol recognition, topology extraction, data reconciliation, and spec extraction. Inputs are the operator's existing GIS, drawings, SCADA, and historian. Output is a simulator-ready model that integrates with the simulators the engineering team already uses. The senior engineer's job shifts from re-keying to verifying. Hydraulic-model build time goes from roughly 200 hours to roughly 20 minutes for the model construction step itself, with calibration time dropping proportionately.
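The five-agent stack reads as plain pipeline composition. A sketch with stand-in stage bodies (the real agents are ML systems, not one-liners; the tag keys here are purely illustrative):

```python
# stand-in stages: each tags the artifact with the work it claims to have done
def classify_drawings(x): return {**x, "classified": True}
def recognize_symbols(x): return {**x, "symbols": True}
def extract_topology(x):  return {**x, "topology": True}
def reconcile_data(x):    return {**x, "reconciled": True}
def extract_specs(x):     return {**x, "specs": True}

def build_model(documents):
    """Five-stage agent pipeline: each agent consumes the previous
    agent's output, ending in a simulator-ready model."""
    stages = [classify_drawings, recognize_symbols,
              extract_topology, reconcile_data, extract_specs]
    artifact = documents
    for stage in stages:
        artifact = stage(artifact)
    return artifact
```

The design choice worth noting is the strict hand-off order: topology extraction only runs on classified, symbol-recognized drawings, which is what lets each agent stay narrow.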

Loop 2 competes with AspenTech Subsurface Intelligence (ASI), SLB Petrel and Eclipse, and Halliburton DecisionSpace at the subsurface altitude. FlowSync sits at the production-engineering and gathering/distribution altitude. The two altitudes coexist and the data layer hand-off between subsurface tools and FlowSync is real.

Loop 3, Safety analysis · runs in WellOPS

Dispatch-enforced qualification, hazard scoring, JSA pre-population. Module: Field Work Management. AI agent: Willie.

TRIR is a lagging indicator. The leading indicator is the unsafe dispatches you stop before the truck rolls. The Safety loop scores every dispatch against operator qualification, weather, asset history, contractor competency, and basin-specific rules, and treats the result as a hard constraint rather than a weight. The unqualified worker cannot be dispatched to the unsafe job. The contractor whose ISN/Avetta/Veriforce status lapsed cannot be assigned. The hot-work permit cannot be issued without the gas-test record.
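The difference between a hard constraint and a weight can be made concrete. A minimal sketch with illustrative field names and checks (the actual rule set lives in the product):

```python
def dispatch_gate(worker, job):
    """Safety checks as red lines: any failed check blocks dispatch
    outright instead of discounting a score."""
    violations = []
    if job["required_oq"] not in worker["oqs"]:
        violations.append("operator qualification missing")
    if job.get("hot_work") and not job.get("gas_test_record"):
        violations.append("hot-work permit without gas-test record")
    if worker.get("contractor") and worker.get("isn_status") != "active":
        violations.append("contractor compliance lapsed")
    # a weighted optimizer would return a penalized score here;
    # a hard constraint returns block/approve, never a discount
    return ("block", violations) if violations else ("approve", [])
```

A weighted version of the same checks would let a high-enough production score buy its way past a lapsed qualification, which is precisely the behavior the loop exists to rule out.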

JSA pre-population from asset hazard profile and weather shortens the JSA preparation step. The operator-edit workflow keeps the JSA from going generic; the lead worker on site is the one who knows what only the lead worker on site knows, and the agent hands off to them on confidence-aware routing.

Loop 3 competes against KPA EHS, VelocityEHS, and ISNetworld on the EHS-platform side, and integrates with Blackline Safety, SoloProtect, and Garmin G7 hardware on the lone-worker side. The Field Work Management module in WellOPS is the operations-anchored safety loop, not a standalone EHS suite. The buyer is the HSE Director who already has a contractor-management platform and needs the dispatch-enforcement layer to make compliance into operational behavior.

Loop 4, Preventative maintenance · runs in WellOPS

Anomaly detection 48 to 72 hours before failure. Modules: Anomaly Detection, Predictive Maintenance, Field Work Management for work-order tracking. AI agent: Willie.

Per-well models trained on each well's own history flag deviations 48 to 72 hours before failure. The reason this works is that no two wells are the same and a fixed-threshold alarm system generates more noise than signal at scale. The reason most predictive-maintenance deployments fail is that the anomaly score is generated but never wired into the daily work plan, which leaves the operator in Era 3 of the four-era frame: spent on AI, did not change the workflow.

WellOPS fixes that by feeding the anomaly score directly into the same ranked queue Loop 1 (Operations) runs. Cost-of-failure × probability × intervention payback ranks the work on the same ruler as the production-driven work. ESP intervention, rod-pump workover, compressor maintenance, valve replacement all compete for the same crew-day with all the production work. The optimizer ranks across categories.
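The one-ruler idea can be sketched as follows. This uses a simplified expected-net-value score as a stand-in for the cost-of-failure × probability × intervention-payback ranking, with illustrative fields:

```python
def expected_value(task):
    """One ruler for maintenance and production work: avoided cost
    (or gained revenue) x probability, net of the crew-day cost."""
    return task["impact_usd"] * task["probability"] - task["crew_cost_usd"]

def ranked_queue(production_tasks, maintenance_tasks):
    # both categories compete for the same crew-day on the same score
    return sorted(production_tasks + maintenance_tasks,
                  key=expected_value, reverse=True)
```

The point is structural: maintenance work enters the same `sorted` call as production work, so an ESP intervention can outrank a choke adjustment on economics alone.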

Loop 4 competes most directly against IBM Maximo, IFS Cloud, Peloton, MaintainX, and Baker Hughes Leucipa ESP Optimizer. The WellOPS wedge is the integration of preventative maintenance into the same work loop as operations and safety, on the same data layer, with the same QA discipline. Operators running both a CMMS and an anomaly-detection point tool typically pay more for the point-tool stack than for a unified WellOPS deployment.

The QA discipline

Six elements that turn agentic AI from a demo into a system the operator can defend.

The QA layer is what makes the difference between “agentic AI” that compounds and “automation that fails silently.” Every loop respects every element. CISOs, HSE Directors, CFOs, and regulators all ask the same questions, and the answer is the same artifact: the QA layer’s output.

01

Constraint enforcement

Safety qualifications, regulatory windows, basin-specific rules, asset-class limits. Hard constraints, not weights. The optimizer treats them as red lines and refuses to violate them. Pumper OQ status, 811 dig-notice windows, OOOOb compliance posture, AQCC Reg 7 setbacks, NDIC gas-capture targets. Every loop respects every constraint that applies to it.

02

Confidence-aware routing

High-confidence outputs flow through the agent autonomously. Low-confidence outputs route to a human (foreman, engineer, HSE lead) with the agent's reasoning surfaced. The threshold is tunable per loop and per asset class. The agent never silently lowballs uncertainty. The honest "I am not sure, you take this one" is what separates a verified system from a confident-sounding wrong answer.
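Confidence-aware routing is a small amount of code with a large behavioral consequence. A sketch, with an assumed threshold value:

```python
def route_output(output, threshold=0.85):
    """Confidence gate (threshold illustrative; tunable per loop and
    per asset class): high confidence flows autonomously, low
    confidence escalates with the agent's reasoning attached."""
    if output["confidence"] >= threshold:
        return {"path": "autonomous", "action": output["action"]}
    return {"path": "human_review",
            "action": output["action"],
            "reasoning": output["reasoning"],   # surfaced, never hidden
            "note": "agent is not sure; a human takes this one"}
```

The failure mode this prevents is the silent one: there is no branch in which a low-confidence output ships without a human and the reasoning attached.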

03

Drift detection

When the data distribution shifts, when a well moves into a new operating regime, when a basin-level rule changes, the model knows. Drift is monitored continuously, not in quarterly reviews. Drift-triggered retraining events are logged and visible to the operator. A model that quietly degrades over six months is the failure mode behind most "the AI used to work" stories.
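A minimal stand-in for the idea. Production drift detectors compare distributions, not just means; this sketch uses a mean-shift z-score for brevity, with an assumed threshold:

```python
import statistics

def drifted(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent window's mean moves more than
    z_threshold baseline standard deviations from the baseline mean.
    A sketch of continuous monitoring, not the production detector."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard a flat baseline
    z = abs(statistics.fmean(recent) - mu) / sigma
    return z > z_threshold
```

Run on a rolling window every ingest cycle, a check like this is what turns "the model quietly degraded for six months" into a logged, visible retraining event.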

04

Outcome feedback

Closeout data feeds back into the model. Did the predicted failure happen? Did the recommended intervention prevent it? Did the JSA flag the actual hazard that mattered on site? The flywheel only works if the loop closes; the loop only closes if outcomes are captured at scale and tied back to predictions. The reinforcement-learning literature calls this the credit-assignment problem; the operator calls it "did the recommendation work or not."
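One minimal way to close the loop numerically is a Beta-prior update on intervention success; this is not claimed to be WorkSync's method, just the smallest illustration of closeout outcomes changing the next ranking input:

```python
def update_success_rate(prior_a, prior_b, prevented):
    """Beta(a, b) posterior update: each closeout answer to 'did the
    intervention prevent the failure?' nudges the success estimate
    that the next scoring pass can use."""
    a = prior_a + (1 if prevented else 0)
    b = prior_b + (0 if prevented else 1)
    return a, b, a / (a + b)   # posterior mean success rate
```

Starting from an uninformative Beta(1, 1), one successful closeout moves the estimate from 0.5 to 2/3; a hundred closeouts make it an earned number rather than a guess.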

05

Audit trail drillable to source

Every decision the agent makes is traceable: which SCADA tag, which work order, which JSA, which OQ record, which simulator output, which permit constraint. The audit trail is the inventory. CFOs, regulators, and CISOs all ask the same question, and the answer is the same artifact. The audit trail is also what survives a personnel change: when the senior pumper retires, the institutional memory does not leave with them.
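The drillable record can be sketched as an immutable dataclass serialized to an append-only trail; the field names are illustrative, not the product schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class DecisionRecord:
    """One agent decision, drillable to its sources."""
    decision: str        # what the agent did
    scada_tag: str       # which signal triggered it
    work_order: str      # which work order it produced or touched
    oq_record: str       # which qualification record it checked
    model_version: str   # which model made the call
    timestamp: float

def log_decision(record, sink):
    # frozen record, append-only sink: the trail is written, never edited
    sink.append(json.dumps(asdict(record)))
```

`frozen=True` plus append-only writes is the cheap version of the property auditors actually want: the record of why a decision was made cannot be revised after the fact.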

06

Honest hand-off on exceptions

When the model is genuinely outside its competence, it says so. It does not produce confident-sounding nonsense. The honest "I do not know, escalate" is the discipline that protects credibility. Most failed AI deployments in oil and gas failed because the agent was forced to answer, and the answer was wrong, and trust never recovered. Agents we build are allowed to refuse.

Without the QA layer

Five failure modes we see in deployments that skipped it.

We have walked into operations where a previous AI deployment failed in one of these ways. The pattern is consistent enough that we treat it as a checklist for any new vendor evaluation, including evaluations that compare us to other vendors. If the QA layer is absent, the failure mode is around the corner.

Anomaly detection without economic ranking

ML model fires on every minor SCADA blip and forwards 200 alerts a day to a foreman who already had 200 alerts a day. The model is doing its job. The deployment is failing because anomalies are not the unit of work, ranked tasks are. Without the QA layer's confidence-aware routing and outcome feedback, the agent never learns what matters.

Generative copilot panels grafted onto existing tools

A chat interface that summarizes the screen the user is already looking at. Demo-friendly, operationally pointless. The CMMS dashboard tells you what tickets are open; the copilot tells you the same thing in a sentence. Without the closed-loop architecture, the copilot is a deliverable, not an outcome.

Auto-generated reports nobody reads

Seven-figure programs whose primary deliverable is a 12-page weekly summary that goes into a folder. The diagnostic is simple: who acts on this report? If the answer is "the team reviews it on Friday," the program shipped a deliverable, not an outcome. Without the audit trail and outcome feedback, the report is decoration.

JSA pre-population without operator-edit workflow

Auto-generated JSAs that ship as "set and forget" become generic enough that the field stops reading them. Without confidence-aware routing and the honest hand-off, the JSA agent fails the safety loop the moment it pretends to know what only the lead operator on site knows.

Predictive maintenance without intervention-outcome capture

Models that produce remaining-useful-life predictions but never see whether the recommended intervention prevented the failure. The flywheel never spins. Six months in, the predictions stop matching reality and nobody knows why. Without outcome feedback, predictive maintenance is just expensive prediction.

The diagnostic, every time: did the morning change? If the morning did not change, the AI is decoration.

Common questions

What does "closed loop" actually mean in this context?

A closed loop is an end-to-end system where the AI detects a condition, scores it, routes work to act on it, executes the action, and captures the outcome to feed back into the next iteration. The loop is "closed" when the outcome of the action makes the next prediction better. In oil and gas, most "AI" deployments are open loops: a model produces a prediction, a human ignores or acts on it, and the action outcome never returns to the model. Closed loops are the difference between agentic AI that compounds and analytics that decay.

Why four loops? Why not one big loop or twenty narrow ones?

Four because the buyer profiles and data sources cluster naturally into four (Ops = VP Ops + SCADA, Engineering = Engineering Manager + GIS/historian/drawings, Safety = HSE + OQ + contractor data, Maintenance = Maintenance Director + CMMS + equipment history). One big loop would force every workflow through the same ranking model and break under the weight; twenty narrow loops fragment the data layer and create the schema-reconciliation problem we are trying to solve. Four is the architecture that ships, scales, and matches how mid-tier operators actually staff.

Do all four loops run on the same data layer?

Yes. WorkSync's Data Hub reads from your existing SCADA, ERP, CMMS, GIS, and historian read-only and produces a normalized, reconciled data layer that all four loops draw from. The reconciliation agent does the schema work once. Each loop scores against its own model and ranks against its own constraints, but the underlying data is the same. This is why integrating two ops stacks during M&A (Devon-Coterra, SM-Civitas) is fast: integrate the data layer once, all four loops light up.

How is this different from "agentic AI" as everyone else uses it?

Most "agentic AI" pitches in oil and gas are shipping the agent without the QA layer underneath. The agent looks impressive in a demo. Six months later it has drifted, nobody knows when, and the operator is back to alarms and spreadsheets. The QA layer (constraint enforcement, confidence-aware routing, drift detection, outcome feedback, audit trail, honest hand-off) is what makes agentic AI a system the operator can defend in front of a CFO, a CISO, an HSE Director, and a regulator. Without it, "agentic" is a euphemism for "automation that fails silently."

Can I deploy one loop at a time?

Yes, and most mid-tier operators do. The standard pattern is: integrate the data layer with Data Hub (free for the integration phase), then activate Loop 1 (Operations) for the fastest cash-flow lift, then layer in Loop 4 (Preventative Maintenance) which shares most of the data sources, then Loop 3 (Safety) once the work-loop discipline is in place, then Loop 2 (Engineering) when an engineering project (hydraulic-model rebuild, M&A integration, completion redesign) creates the trigger. Customers who try to ship all four at once usually slow down. Customers who ship one at a time hit value in 30 days.

Where do I read more on each loop?

Loop 1 (Operations) runs in WellOPS: Work Engine, Route Optimizer, Field Data Capture. Read /wellops or Chapter 6 of the Ultimate Guide. Loop 2 (Engineering) runs in FlowSync: Model Builder, Flow Simulator, Process Simulator. Read /flowsync or Chapter 7 of the guide. Loop 3 (Safety) runs in WellOPS Field Work Management. Read /wellops/field-work-management or Chapter 9 of the guide. Loop 4 (Maintenance) runs in WellOPS Anomaly Detection + Predictive Maintenance + Field Work Management. Read /capabilities/anomaly-detection and /capabilities/predictive-maintenance. All four loops share one Data Hub and one QA discipline.

What is WellOPS and what is FlowSync?

WorkSync ships two products on one Data Hub. WellOPS is the field-operations product: it owns Loop 1 (Operations), Loop 3 (Safety), and Loop 4 (Preventative Maintenance), with Willie as the AI field agent. WellOPS modules are Work Engine, Route Optimizer, Field Data Capture, and Field Work Management. FlowSync is the engineering product: it owns Loop 2 (Automated Engineering), with Taylor as the AI engineer agent. FlowSync modules are Model Builder, Flow Simulator, and Process Simulator. Both run on the WorkSync Data Hub, the read-only integration layer that connects to your existing SCADA, ERP, CMMS, GIS, and historian.

How does this compare to Baker Hughes Leucipa, SLB Tela, or AspenTech ASI?

Each of those vendors competes on one or two of the four loops. Leucipa is strongest on Loop 1 (Operations) and Loop 4 (ESP-specific Maintenance). SLB Tela is strongest on Loop 2 (Engineering, especially subsurface) and parts of Loop 1. AspenTech ASI is Loop 2 (subsurface engineering). WorkSync ships WellOPS for Loops 1, 3, and 4 and FlowSync for Loop 2 on a unified Data Hub with a unified QA discipline, sized for mid-tier US upstream operators (500-5,000 wells). The full head-to-head against each vendor is in the comparison pages linked below.

Activate one loop, then the next

Land FREE with Data Hub. Light up Loop 1 in 30 days. Add the rest as the value compounds.

The data layer integrates once. The four loops light up in sequence. The QA discipline runs underneath all of them. Most mid-tier operators activate Loop 1 first for the cash-flow lift, then add Loop 4 (predictive maintenance) inside the same ranked queue, then layer in Safety and Engineering as the work creates the trigger.

24-hour reply · 4-week scope + pricing · below VP signing authority on the entry tier