Human Signal™

Presence Signaling Architecture

Govern the Machine

The future requires human signal to overcome artificial noise. The machine must not win. Equip yourself with frameworks to navigate institutions disrupted by artificial intelligence.

The Platform

The Human Signal Architecture

Founded by Dr. Tuboise Floyd · AI Governance Researcher & Host

Human Signal is an independent research and media platform dedicated to artificial intelligence governance and institutional risk.

We reverse-engineer the institutional failures that occur when organizations treat artificial intelligence as a procurement problem instead of a systems design problem, and we build frameworks operators can actually use.

This is a presence-first architecture. We show you how to build earned trust and restore visibility in systems designed for observation rather than recognition.

Open Source

Open Source Governance

We build the frameworks and codebases required to audit artificial intelligence infrastructure. Access our full suite of analyzers and optimizers on GitHub.

Access the Human Signal Repository

Free Study Tool

Study the TAIMScore™ Framework. All 72 Controls.

GOVERN · MAP · MEASURE · MANAGE. Every control from the Trusted AI Model — formatted as interactive flashcards. Study before the workshop. Drill the framework. Test your recall.

Launch Flashcards →

GOVERN · 19 controls
MAP · 20 controls
MEASURE · 18 controls
MANAGE · 15 controls

TAIMScore™ In Action

See Real Incidents Scored

Human Signal applies the TAIMScore™ framework to real AI failures on the podcast. These Failure Files show exactly what you'll practice in the workshop.


Failure File 01 of 12

Accountability & Training

When Your AI Learns to Hate — On Company Time

GOVERN 2.2 · MEASURE 2.6 · MANAGE 2.1
Microsoft TAY · AIID #6
FF 01 · Breakdown

Microsoft spent $0 on adversarial input controls before releasing TAY in March 2016. Within 16 hours it published racist propaganda. GOVERN 2.2 failure: no accountability structure for what happens when your AI learns from the internet without guardrails. Four distinct failures compounded across TAIM domains: no testing protocol, no kill-switch SLA, no non-AI fallback, no deactivation authority.

Join the Next Session →
Failure File 05 of 12

Privacy Risk & Socio-Technical Design

OpenAI Scraped the Internet. Your Data Was in It.

MAP 1.6 · MEASURE 2.10
OpenAI Class Action · AIID #561
FF 05 · Breakdown

A 157-page class action alleged ChatGPT was trained on private data without consent — including children's data and PII. FTC investigation opened. Every org that deployed ChatGPT in a regulated environment without asking "what data was this model trained on?" inherited this risk on sign-up. MAP 1.6 + MEASURE 2.10: privacy risk existed but was never formally scored. Active exposure under HIPAA, TRAIGA, and the EU AI Act simultaneously.

Join the Next Session →
Failure File 09 of 12

Bias, Fairness & Contextual Deployment

The Algorithm Said It Was Him. It Wasn't.

MAP 1.2 · MEASURE 2.11
Wrongful Arrests · AIID #74 · #896
FF 09 · Breakdown

Three Black men. Three wrongful arrests. Facial recognition never validated for the population it was used to identify. Detroit Police acknowledged a 96% misidentification rate when used in isolation. Detroit settled for $300,000. MAP 1.2: no demographic performance analysis documented. MEASURE 2.11: fairness and bias evaluated after arrests made national news — not before deployment.

Join the Next Session →
Failure File 12 of 12

Feedback Systems & Context-Appropriate AI Use

The Condolence Email That Wrote Itself

MEASURE 3.3
Vanderbilt / ChatGPT
FF 12 · Breakdown

Vanderbilt sent students a condolence email after a mass shooting. At the bottom: "Paraphrase from OpenAI's ChatGPT." National backlash. Public apology. Permanent reputational damage. MEASURE 3.3 failure: no feedback mechanism existed to flag high-stakes communication contexts where AI output must be reviewed, escalated, or prohibited. The governance layer that asks "must a human own these words?" did not exist.

Join the Next Session →

12 Incidents · 12 TAIM Controls

Every failure is a practice scenario.
See the full set — scored, sourced, and mapped.

Now Broadcasting

The AI Governance Briefing
with Dr. Tuboise Floyd

A Human Signal Production

Rapid-fire episodes on AI governance, institutional risk, and finding your human value when the machine noise gets loud.

Latest Episode Guest Interview · 2026

Making Digital Accessibility Work In The AI Era

Dr. Tuboise Floyd sits down with Dr. Michele A. Williams to explore how AI is reshaping — and challenging — digital accessibility. What does meaningful inclusion look like when institutions race to automate? And who gets left behind when the signal isn't designed for everyone?

Watch on YouTube

AI Governance

The Automation Paradox

Finding your human value when AI rewrites the rules. Why "leverage" is replacing judgment — and how to protect your signal.

BAR Method

Break the Cycle

Background, Action, Result as a personal reinvention framework. Run the ultimate self-interview and rewire your next move.

Identity

Who Are You Beneath the Noise?

Five steps to break autopilot, claim solitude, and find purpose through serving others. A framework for genuine growth.

Career

Market Signals, Not Just Skills

How market signals shape career opportunities. Stand out by amplifying your story at the right moment, in the right system.

Never miss an episode

Subscribe wherever you listen.

New episodes drop weekly. Rapid-fire. No noise.

Common Questions

What is TAIMScore™?

TAIMScore™ — the Trusted AI Model Score — is an enterprise AI maturity and risk assessment framework developed by HISPI Project Cerebellum. It gives auditors, executives, and compliance professionals a structured methodology to score, audit, and manage an organization's AI readiness. Human Signal™ is an authorized affiliate partner promoting the official TAIMScore™ Assessor Workshop — virtual, hands-on, 6 CPEs.

Who is Dr. Tuboise Floyd?

Dr. Tuboise Floyd is the founder of Human Signal™ and an independent AI governance researcher. He developed the LEAC Protocol and the Noise Discipline Framework — tools for restoring human visibility and institutional signal in automated, high-noise environments. He hosts the Human Signal podcast and leads the quarterly Town Hall for institutional operators navigating AI disruption. Read his full mission.

What is the LEAC Protocol?

The LEAC Protocol is a macro diagnostic tool built from forensic market analysis — not a governance framework. It identifies the physical infrastructure constraints that determine which AI companies survive the infrastructure war and where value erodes.

The market has split in two. While the consumption economy ghosts high-value talent, the investment economy is quietly hardening the physical layer. The four components represent the binding constraints:

  • L — Lithography
    Control of the semiconductor supply chain, particularly photolithography equipment, is critical. Without direct access to silicon manufacturing, you are dependent on the capacity of others. (Signal: ASML)
  • E — Energy
    The electrical grid becomes the limiting factor. AI training and inference require massive power, so securing gigawatt-scale power contracts is essential. (Signal: Crusoe, Leidos)
  • A — Arbitrage
    Retail electricity pricing is unsustainable for large-scale AI operations. Success requires finding arbitrage opportunities — stranded energy, flare gas, off-peak power — to reduce compute costs. (Signal: Lambda, CoreWeave)
  • C — Cooling
    Thermodynamics is the ultimate constraint. High-performance computing generates enormous heat. Without adequate cooling infrastructure, clusters cannot run. This is a fundamental solvency issue. (Signal: Path Robotics, Array Labs, Varda Space Industries, VulcanForms, Hadrian, Shift5)

If your AI strategy does not address all four constraints, you are leaking value. Companies that solve these physical infrastructure challenges will outlast those focused purely on algorithmic improvements.

What is the Human Signal podcast?

Human Signal with Dr. Tuboise Floyd is an AI governance podcast for the Builder Class — leaders, auditors, and institutional operators navigating AI-disrupted systems. Dr. Floyd examines the physics of institutional failure, the limits of automation, and what it takes to govern the machine. Available on Spotify and Apple Podcasts.

How can my organization underwrite Human Signal?

Human Signal™ offers three underwriting tiers — from per-episode Signal Drop Sponsorships ($1,500) to full Seasonal Signal Partnerships ($6,000/quarter) and the Signal Brief Presenting Partner package ($12,000/quarter), which includes named sponsorship of the Quarterly Town Hall and direct introductions to Dr. Floyd's institutional network. See full tier details and inquire.