Encode

Intelligibility

$$\frac{\vdash X : \mathsf{Type} \qquad x : X \vdash A(x) : \mathsf{Type}}{\vdash \sum_{x : X} A(x) : \mathsf{Type}}$$

Where ⊢ means "from this assumption it follows that",
and the horizontal bar reads

$$\frac{\text{premises}}{\text{conclusion}}$$

In the first premise there is nothing to the left of the turnstile: there is no initial assumption.
Contrast this with the way that the turnstile changes direction in ΑΒΓΔ.XYZ.
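
As a concrete illustration of the formation rule above, here is a minimal Lean 4 sketch; it uses Lean's built‑in dependent pair `(x : X) × A x` for the Σ‑type and is illustrative only, not part of the kernel described later.

```lean
-- Minimal illustration of Σ-type formation (Lean 4 core, no extra libraries):
-- given a type X and a family A over X, the dependent sum is again a type.
example (X : Type) (A : X → Type) : Type :=
  (x : X) × A x

-- An inhabitant pairs a witness x : X with evidence a : A x.
example (X : Type) (A : X → Type) (x : X) (a : A x) : (x : X) × A x :=
  ⟨x, a⟩
```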

My work relates to ARIA’s Mathematics for Safe AI and Safeguarded AI programmes, focusing on the epistemic and communal infrastructures around powerful models.
I do not develop safety cases, interpretability tools, or training pipelines.
I am building a neurosymbolic architecture.
I will utilise developments in the agentic AI arena such as Nemoclaw, Agent Toolkit and small language models; these references are to Nvidia, but other models will be compared.
Agents act as untrusted proposal generators around a formally specified, co‑constructive logic kernel with explicit provenance and experimental, optional cryptographic anchoring. This is intended to give research communities (here psychoanalysts, AI‑safety practitioners, and software engineers) a way to represent and revise their claims, norms, and disagreements as evolving logical structures, adding tools for transparent, auditable reasoning in scientific and clinical practice.

AI assistant agents are treated as untrusted proposal generators wrapped around a formally specified proof engine.
The typed sequent‑calculus core is a co‑constructive proof/refutation kernel that accepts only structured inputs and produces checkable proof objects, while all free‑text interaction is mediated by an “agentic edge” that can suggest but never unilaterally ratify inferences.
A provenance layer records formulas, sequents, proofs, and their relations as a graph of “supports/refutes/alternative” links back to source documents, enabling any reasoning step to be re‑checked. Each agent has its own internal logic and interface, and interactions between agents become compositional transformations between local theories.
At an early stage this could be implemented as an Obsidian‑integrated research tool.
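
A minimal Python sketch of how this split might look; every name here (Kernel, AgenticEdge, ProvenanceGraph, node_id) is hypothetical, and the check is a stand‑in for the real co‑constructive kernel rather than an implementation of it.

```python
# Hypothetical sketch only: class names, fields, and the toy "check" are
# placeholders, not the actual kernel or provenance layer described above.
import hashlib
from dataclasses import dataclass, field
from enum import Enum


def node_id(content: str) -> str:
    """Content-address an artefact (formula, sequent, proof, source excerpt)."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()[:16]


class Link(Enum):
    SUPPORTS = "supports"
    REFUTES = "refutes"
    ALTERNATIVE = "alternative"


@dataclass(frozen=True)
class Sequent:
    antecedents: tuple[str, ...]  # structured formulas, never free text
    succedent: str


@dataclass(frozen=True)
class ProofObject:
    conclusion: Sequent
    rule: str                                # named inference rule
    premises: tuple["ProofObject", ...] = ()


@dataclass
class ProvenanceGraph:
    # (from_id, link, to_id) edges back to formulas, proofs, and source documents
    edges: list[tuple[str, Link, str]] = field(default_factory=list)

    def record(self, src: str, link: Link, dst: str) -> None:
        self.edges.append((src, link, dst))


class Kernel:
    """Accepts only structured inputs; an accepted step is a checkable proof object."""

    def check(self, candidate: ProofObject) -> bool:
        # A real kernel would re-derive the conclusion from the premises via the
        # named rule; here we only insist the input is fully structured.
        return bool(candidate.rule) and all(
            isinstance(p, ProofObject) for p in candidate.premises
        )


class AgenticEdge:
    """Untrusted proposer: may suggest inferences, never ratify them itself."""

    def __init__(self, kernel: Kernel, graph: ProvenanceGraph) -> None:
        self.kernel, self.graph = kernel, graph

    def propose(self, candidate: ProofObject, source_excerpt: str) -> bool:
        accepted = self.kernel.check(candidate)
        if accepted:
            self.graph.record(
                node_id(repr(candidate)), Link.SUPPORTS, node_id(source_excerpt)
            )
        return accepted
```

The design point is simply that the edge can only propose: nothing enters the provenance graph unless the kernel has accepted the structured proof object.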

The aim of internal provenance is to understand which artefacts in AI‑assisted inquiry benefit from being tamper‑evident and which must remain fluid and revisable, working at the scale of corpora and compute budgets that AI‑for‑Science programmes enable.
Substantial compute would enable two forms of stress‑test.

  1. Running large‑scale experiments over evolving bodies of work (such as AI‑safety literature or software artefacts) to see how formal guarantees and cryptographic commitments can constrain what AI tools “claim”.
  2. Employing methods like those in Karpathy's Autoresearch (Claude.md manifest) that iterate over alternative engine and prompt configurations, defining success criteria for experimental purposes; a sketch follows this list.
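
A sketch of what the second kind of stress‑test loop could look like; the Config fields, the run_checks callable, and the 0.8 threshold are illustrative assumptions, not fixed choices.

```python
# Hypothetical harness: configuration fields, run_checks, and the threshold are
# placeholders for whatever a given experiment actually declares as "success".
from dataclasses import dataclass
from itertools import product
from typing import Callable


@dataclass(frozen=True)
class Config:
    engine: str        # which proof engine / model backs the agentic edge
    prompt_style: str  # how proposals are elicited from the model


def sweep(
    engines: list[str],
    prompt_styles: list[str],
    run_checks: Callable[[Config], float],  # fraction of proposals the kernel accepts
    threshold: float = 0.8,                 # success criterion, for experimental purposes only
) -> dict[Config, float]:
    """Iterate over alternative engine/prompt configurations and keep those that pass."""
    results = {
        Config(engine, style): run_checks(Config(engine, style))
        for engine, style in product(engines, prompt_styles)
    }
    return {cfg: score for cfg, score in results.items() if score >= threshold}
```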

I come to AI from psychoanalysis, with three decades shaping how I think about language, change, and responsibility.
I was also an OO‑specialist Java engineer for about a decade, including a year teaching at Greenwich University.

It is ethically unacceptable to use clinical material as data.
Broadly, I will:

  1. Focus on public sources: published case write‑ups, debates, and consensual lab studies.
  2. Ground the work in core material: David Corfield's Modal Homotopy Type Theory, James Trafford's Meaning in Dialogue and their references, while embracing pioneers like Karl Friston.
  3. Ingest that core literature, as far as possible.
  4. Develop intelligibility tools for users.

Model expert communities as evolving “local topoi”, where each group has its own internal theory and norms, and interactions between groups become morphisms between those structures.
Then study how concepts, norms, and disagreements move across communities over time, while still respecting necessary ethical and epistemic limits.
Reasoning patterns (support, refutation, alternative hypotheses, resource trade‑offs) provide intelligibility for communities without collapsing their practices into crude metrics.
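
One deliberately crude way to make this concrete as data, with all names (Community, Morphism, preserves_intelligibility) hypothetical rather than part of any existing library:

```python
# Hypothetical data sketch: a community's "internal theory" is reduced to sets of
# claims and norms, and a morphism to a translation function plus a crude check.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Community:
    name: str
    claims: set[str] = field(default_factory=set)  # internal theory
    norms: set[str] = field(default_factory=set)   # standards of acceptance


@dataclass
class Morphism:
    source: Community
    target: Community
    translate: Callable[[str], str]  # how the source's claims are re-expressed

    def preserves_intelligibility(self) -> bool:
        # Crude stand-in for structure preservation: every translated claim is
        # either already held by the target or explicitly flagged as a proposal.
        return all(
            (t := self.translate(c)) in self.target.claims or t.startswith("proposal:")
            for c in self.source.claims
        )
```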

Study communities: modelled as networks of agents whose claims, challenges, and revisions form a pattern of support and refutation.
The underpinning is category theory, social choice theory, and game and information theory.
Evolving logical or topos‑like structures, with lifetimes that correspond to periods of practical cohesion, can be deprecated when their assumptions cease to hold.
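
A small sketch of the lifetime idea, again with hypothetical names: assumptions are recorded as explicit checks, and the structure is deprecated rather than silently deleted when one fails.

```python
# Hypothetical sketch: an evolving structure stays "live" only while its recorded
# assumptions hold, and is deprecated (never deleted) when they cease to hold.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class EvolvingTheory:
    name: str
    assumptions: dict[str, Callable[[], bool]] = field(default_factory=dict)
    deprecated: bool = False

    def revalidate(self) -> bool:
        """Re-check every assumption; deprecate the theory if any has ceased to hold."""
        if any(not holds() for holds in self.assumptions.values()):
            self.deprecated = True  # the period of practical cohesion has ended
        return not self.deprecated
```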

Access substantial compute and large corpora: the ARIA Mathematics for Safe AI and Safeguarded AI material, neighbouring AI‑safety literature, and selected clinical/scientific domains.

The architecture maps how notions of risk, alignment, harm and responsibility are introduced, contested, and stabilised across communities.
This connects model‑level safety work to the higher‑level question of how societies and expert groups actually reason about, and update, their processes and beliefs when powerful AI systems are in use.