PROJECT IRIS Research Portal · Technical Track

PROJECT IRIS

A reproducibility-first research stack for mechanistic biology simulation and closed-loop model discovery. Current focus: SCNT feasibility modeling (CFM v0.40), multi-organ PK/PD virtual physiology (BSE), and iterative equation refinement via NSCS.

Status: Active Development · Station: Sarnia-01 · Standard: MSRR-Compliant

Active Research Simulators

Production-calibrated modeling + prototype physiology engine (mechanistic-first).
PRODUCTION · CFM v0.40 · Domain: SCNT feasibility

Cloning Feasibility Model (CFM)

Probabilistic gate-based model of SCNT (Somatic Cell Nuclear Transfer) that formalizes end-to-end cloning feasibility as a composition of biological stage constraints with uncertainty quantification.

Benchmark method: Monte Carlo (2,000 draws). Parameter vectors sampled from evidence.csv → outcome distribution → P05/P50/P95 bands.
Current calibration: 0.57% mean success probability. A scientifically meaningful baseline because it matches historically constrained feasibility (not optimistic priors).
  • Gate structure: SCNT decomposed into stage-wise gates (e.g., reprogramming, cleavage, implantation, placental development) to expose where variance accumulates.
  • Sensitivity layer: Computes gate–outcome associations across draws to identify the strongest bottlenecks (targets for intervention and model refinement).
  • Outputs: Full outcome distribution + uncertainty bands (P05/P50/P95) suitable for reporting and regression tests across versions.
Why 0.57% matters: It’s a calibration anchor that demonstrates the model is constrained by realistic failure rates. This makes downstream improvements (new gates, stronger priors, intervention models) measurable and falsifiable.
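The gate composition above can be sketched in a few lines. This is a hedged illustration, not the calibrated model: the gate names follow the bullet list, but the Beta priors are placeholders standing in for the parameter vectors sampled from evidence.csv.

```python
# Illustrative sketch of CFM-style gate composition: each stage gate
# gets a success probability per Monte Carlo draw, end-to-end
# feasibility is their product, and the outcome distribution is
# summarized as P05/P50/P95 bands. Priors here are placeholders.
import numpy as np

rng = np.random.default_rng(42)
N = 2000  # draws, matching the benchmark method

# Hypothetical Beta priors (alpha, beta) per stage gate
gates = {
    "reprogramming": (2.0, 8.0),
    "cleavage": (6.0, 4.0),
    "implantation": (3.0, 7.0),
    "placental_development": (1.5, 8.5),
}

# Sample each gate's success probability, compose end-to-end feasibility
samples = np.prod(
    [rng.beta(a, b, size=N) for a, b in gates.values()], axis=0
)

p05, p50, p95 = np.percentile(samples, [5, 50, 95])
mean_success = samples.mean()
```

Because feasibility is a product of per-gate probabilities, the sensitivity layer can correlate each gate's draws against the final outcome to find the dominant bottleneck.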
LIVE · BSE DEMO · Model: Propofol (Marsh 1991)

BSE Interactive PK Demo

Three-compartment mechanistic PK model solved via RK4 integration, parameterized from literature (Marsh model). Interactive dosing + infusion controls with live concentration curves and metrics.

Embedded Simulator · Local path: /demo/bse-propofol/
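A minimal sketch of the demo's numerical core: the three-compartment mass-balance ODEs stepped with fixed-step RK4. The rate constants are the commonly quoted Marsh (1991) propofol values (per minute); the 70 kg patient weight and the dosing schedule are illustrative, not the demo's defaults.

```python
# Three-compartment PK model (central + fast/slow peripheral) solved
# with classical RK4. Rate constants: commonly quoted Marsh values.
import numpy as np

K10, K12, K13 = 0.119, 0.112, 0.042  # elimination / distribution, min^-1
K21, K31 = 0.055, 0.0033             # return flows, min^-1
V1 = 0.228 * 70                      # central volume (L), 70 kg patient

def deriv(a, infusion_mg_min):
    a1, a2, a3 = a  # drug amounts (mg) per compartment
    da1 = infusion_mg_min - (K10 + K12 + K13) * a1 + K21 * a2 + K31 * a3
    da2 = K12 * a1 - K21 * a2
    da3 = K13 * a1 - K31 * a3
    return np.array([da1, da2, da3])

def rk4_step(a, dt, u):
    k1 = deriv(a, u)
    k2 = deriv(a + 0.5 * dt * k1, u)
    k3 = deriv(a + 0.5 * dt * k2, u)
    k4 = deriv(a + dt * k3, u)
    return a + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative dosing: 2 mg/kg bolus, then 10 mg/kg/h infusion, 60 min
a = np.array([2.0 * 70, 0.0, 0.0])
dt, conc = 0.01, []
for _ in range(6000):
    a = rk4_step(a, dt, u=10.0 * 70 / 60.0)  # infusion in mg/min
    conc.append(a[0] / V1)                   # plasma concentration, mg/L
```

The live demo's dosing and infusion controls amount to changing the bolus amount and the `u` term between steps; the concentration curve is just `conc` over time.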
PROTOTYPE · VIRTUAL PHYSIOLOGY · Domain: Multi-organ PK/PD

Bio-Simulation Engine (BSE)

Mechanistic “Virtual Physiology” engine modeling organs as coupled dynamical systems (compartments with explicit state variables) rather than black-box predictors.

  • Architecture: Multi-compartment PK/PD with explicit state evolution per organ (concentration, binding fraction, immune activity, toxicity thresholds).
  • Coupling: Inter-organ flows model distribution and exchange (circulation-driven transport + organ-specific transformation/clearance).
  • ADME backbone: Absorption, distribution, metabolism, excretion encoded as parameterized transfer/clearance operators across compartments.
  • Modularity: Organ plug-ins enable incremental scope growth (add organs, immune overlays, toxicity models) without rewriting the core solver.
  • Current experiment: Rabbit → Human scaling tests using physiologically based allometric transformations of organ volumes, flow rates, and clearance constants to evaluate cross-species generalizability.
Rabbit → Human test goal: preserve mechanistic structure while transforming parameters; compare predicted dynamics to known curve shapes/constraints to validate portability of the simulator.
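The parameter transformation can be sketched as a per-parameter power law in body mass. This is a hedged illustration of the standard allometric convention (volumes scale roughly linearly with mass, flows and clearances roughly with mass to the 0.75 power); the rabbit parameter values are placeholders, not BSE's calibrated constants.

```python
# Rabbit -> Human allometric parameter scaling sketch. The mechanistic
# structure (the ODEs) is preserved; only parameters are transformed.
M_RABBIT, M_HUMAN = 3.0, 70.0  # body masses in kg (illustrative)

def scale(value, exponent):
    """Power-law allometric transform: value * (M_human/M_rabbit)^b."""
    return value * (M_HUMAN / M_RABBIT) ** exponent

# Placeholder rabbit parameters (not calibrated BSE values)
rabbit = {
    "liver_volume_L": 0.10,
    "hepatic_flow_L_min": 0.12,
    "clearance_L_min": 0.05,
}

human_pred = {
    "liver_volume_L": scale(rabbit["liver_volume_L"], 1.0),     # ~M^1
    "hepatic_flow_L_min": scale(rabbit["hepatic_flow_L_min"], 0.75),
    "clearance_L_min": scale(rabbit["clearance_L_min"], 0.75),  # ~M^0.75
}
```

The portability test then re-runs the same coupled ODEs with `human_pred` and compares predicted dynamics against known human curve shapes and constraints.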

Discovery & Optimization

Closed-loop equation search + residual stabilization to improve mechanistic simulators.
LOOP · NSCS · Mode: Mechanistic + Residual

Neural Self-Consistent System (NSCS)

Closed-loop model discovery framework that ingests time-series trajectories, proposes symbolic candidates, rejects unstable forms, and applies neural residual correction to improve stability and predictive fidelity.

Input: time-series trajectories. Simulator outputs (e.g., BSE organ trajectories) or experimental curves.
Output: selected governing form. Best candidate equation + residual term (if needed) + audit trail.
  • 1) Ingest: read trajectories (states over time), normalize channels, define target variables and candidate feature library.
  • 2) Symbolic candidate generation: generate ODE candidates via constrained symbolic regression / structured search (candidate families, parameter sweeps).
  • 3) Score + reject: evaluate candidates on fit + stability (e.g., exploding/oscillatory divergence, nonphysical trends) and discard unstable solutions.
  • 4) Residual correction: train a neural residual model Δ(t, x) to learn the mismatch between mechanistic candidate predictions and observed trajectories.
  • 5) Compose: final model = mechanistic form + residual correction (only if it improves stability/generalization) with versioned metrics.
  • 6) Feedback: inject improved equation/parameters back into BSE to iteratively refine the simulator under the same evaluation protocol.
Why residuals: preserves interpretability of the mechanistic backbone while absorbing unmodeled dynamics (measurement artifacts, omitted nonlinearities, hidden coupling terms) in a controlled, testable way.
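The score + reject + residual steps above can be sketched on a toy 1-D trajectory. Everything here is illustrative: the "symbolic" candidate family is a small hand-written dictionary, and a polynomial least-squares fit stands in for the neural residual model Δ(t, x).

```python
# Toy NSCS loop: sweep candidate forms/parameters, reject unstable
# predictions, pick the best fit, then keep a residual correction
# only if it improves the error (mirroring step 5's condition).
import numpy as np

t = np.linspace(0, 5, 200)
observed = np.exp(-0.8 * t) + 0.05 * np.sin(3 * t)  # toy trajectory

candidates = {
    "exp_decay": lambda t, k: np.exp(-k * t),
    "linear": lambda t, k: 1.0 - k * t,
}

def stable(pred):
    # Reject exploding / nonphysical solutions (step 3)
    return np.all(np.isfinite(pred)) and np.max(np.abs(pred)) < 10.0

best_name, best_pred, best_err = None, None, np.inf
for name, f in candidates.items():
    for k in np.linspace(0.1, 2.0, 20):  # parameter sweep (step 2)
        pred = f(t, k)
        if not stable(pred):
            continue
        err = np.mean((pred - observed) ** 2)
        if err < best_err:
            best_name, best_pred, best_err = name, pred, err

# Residual correction (step 4): fit the mismatch, compose (step 5)
residual = np.polyval(np.polyfit(t, observed - best_pred, 6), t)
composed = best_pred + residual
composed_err = np.mean((composed - observed) ** 2)
final = composed if composed_err < best_err else best_pred
```

Step 6 would feed the selected form and parameters back into BSE and re-run the same evaluation protocol, closing the loop.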

FRB Signal Analysis

Burst classification pipeline with reproducible run ledgers and figures.
PIPELINE · FRB v0.10 · Mode: Time-series classification

Fast Radio Burst Scoring & Evaluation

A reproducibility-first pipeline that ingests FRB catalogs and derived features, produces event scores, and logs evaluation metrics per run.

  • Ingest: standardized catalog tables with version tags.
  • Feature layer: derived burst features and normalization.
  • Model layer: per-event scoring with versioned outputs.
  • Evaluation: AUC, PR-AUC, F1, and threshold sweeps.
  • Run ledger: each experiment recorded as a row in a CSV.
Note: If your FRB figures aren’t present yet, the portal can still ship — just add your PNGs later under /assets/figures/.
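The evaluation layer's metrics can be computed without heavy dependencies. A minimal sketch, using synthetic scores and labels: ROC-AUC via the rank (Mann-Whitney) statistic and an F1 threshold sweep.

```python
# Evaluation-layer sketch: ROC-AUC from ranks + F1 threshold sweep.
# Labels and scores here are synthetic, not real FRB catalog data.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
scores = labels * 0.5 + rng.normal(0, 0.4, size=500)  # informative scores

def roc_auc(y, s):
    # AUC = P(score_pos > score_neg), computable from rank sums
    order = np.argsort(s)
    ranks = np.empty(len(s), dtype=float)
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def f1_at(y, s, thr):
    pred = s >= thr
    tp = np.sum(pred & (y == 1))
    if tp == 0:
        return 0.0
    precision = tp / pred.sum()
    recall = tp / (y == 1).sum()
    return 2 * precision * recall / (precision + recall)

auc = roc_auc(labels, scores)
thresholds = np.linspace(scores.min(), scores.max(), 50)
best_thr = max(thresholds, key=lambda t: f1_at(labels, scores, t))
```

Per run, `auc`, the PR-AUC analogue, and the best-threshold F1 would be the values written into the run ledger row.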

Data & Downloads

Artifacts for replication: datasets, run ledgers, and portal bundles.
ARTIFACTS · PUBLIC · Format: CSV / PNG / ZIP

Downloadable Artifacts

Versioned outputs from experiments. Update the run ledger first, then upload matching figures and optional ZIP bundles.

  • Run ledger fields: run_id, date, model, seed, AUC, PRAUC, F1, notes.
  • Portal pack: latest ledger + figures + short summary README.
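Updating the ledger first is a one-append operation. A minimal sketch with the fields listed above; the file path and row values are illustrative.

```python
# Append one experiment row to the CSV run ledger, writing the header
# only when the file is created. Path and values are illustrative.
import csv
import os
from datetime import date

LEDGER = "runs.csv"  # hypothetical ledger path
FIELDS = ["run_id", "date", "model", "seed", "AUC", "PRAUC", "F1", "notes"]

row = {
    "run_id": "frb-0001", "date": date.today().isoformat(),
    "model": "frb-v0.10", "seed": 42,
    "AUC": 0.91, "PRAUC": 0.77, "F1": 0.72, "notes": "baseline",
}

new_file = not os.path.exists(LEDGER)
with open(LEDGER, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if new_file:
        writer.writeheader()  # header exactly once per ledger
    writer.writerow(row)
```

Matching figures and an optional ZIP bundle are then uploaded under the same `run_id`, keeping artifacts and ledger rows in one-to-one correspondence.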

Laboratory Infrastructure

Local execution for iteration speed + optional Colab for portable benchmarking.
LOCAL WORKSTATION

Compute Environment

Primary local system used for Monte Carlo benchmarks, ODE integration, and GPU-accelerated neural residual training. Enables rapid iteration without cloud dependency.

CPU: AMD Ryzen 5 5500
GPU: NVIDIA RTX 3050
RAM: 32 GB DDR4
OS: Windows 11
ROLE IN PIPELINE

Operational Advantages

  • Iteration speed: same-day tuning of priors, gates, and physiology parameters.
  • GPU acceleration: faster neural residual training for NSCS stability correction.
  • Reproducibility: consistent local environment for versioned benchmarks and regression tests.
  • Portability: Colab notebooks can mirror benchmark runs for shareable scientific logs.
Engineering posture: local-first development + notebook-based logs for transparency and replication.