S-Curves, Slowdowns and Surprises

An update with futurist and psychologist Graham Norris on how organisations and individuals should think about rapid change.

The conversation focuses on AI’s transition from hype to deployment, the behavioural risks of over-reliance, and how firms can maintain judgement and differentiation as AI becomes normalised.

Find out more about Graham and Foresight Psychology here.

Key Takeaways

  • Technological change continues to accelerate overall, with multiple “S-curves” still in steep phases.
  • AI platform capability improvement appears to be levelling off, shifting attention from hype to practical deployment.
  • Delivering “useful” AI in production remains difficult; several expected job displacements have not materialised at pace.
  • The perceived fear around AI is waning as tools become normalised in everyday products and workflows.
  • Highest-impact AI adoption often sits in “unnoticed” operational use cases (e.g., data cleansing, transcription, analysis, content generation).
  • Financial services is positioned as relatively cautious, favouring safer, narrower applications.
  • Over-reliance risk is material: users can defer too readily to AI, reducing critical thinking and increasing “automation complacency”.
  • Governance needs to focus on clear role assignment: what AI does, what humans verify, and how accountability is retained.
  • Human–AI interaction design can create a false sense of “human-like” reliability; this is a design choice with associated risks.
  • If firms use the same AI for decisioning, outcomes may converge, creating more homogeneous, commoditised industries.
  • Competitive advantage is positioned as remaining with human creativity, particularly where models steer towards “safe” average outputs.
  • Taking a longer-term perspective (beyond headlines) is presented as the practical antidote to short-term fear and uncertainty, echoing the dotcom cycle.

Innovation

  • Treat AI as “narrow task automation” first: constrain data, scope, and instructions to reduce hallucination risk and increase usefulness (a minimal sketch follows this list).
  • Build explicit human-in-the-loop checks that mirror peer review between humans (process, criteria, and auditability).
  • Design decisioning to avoid “industry mean reversion” by deliberately preserving space for human creativity and differentiated judgement.
  • Use AI to scan cross-market signals (including local-language sources) to triangulate uncertainty themes across geographies.
  • Persist through the post-hype downturn to capture the practical opportunity window (dotcom analogy).
  • Encourage scenario-led, opinion-driven planning: forming a view of the future creates direction and supports continuous updating.
  • Explore “multiple AI perspectives” rather than one generic model response, to generate a wider option set for decision-making.
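
For the first two points, a short sketch may help make the idea concrete. The Python below is hypothetical (the episode discusses the principle, not an implementation): `call_model`, `classify_case`, and the category list are placeholder names, and the model call is stubbed so the example runs end to end.

```python
# Hypothetical sketch (not from the episode): a narrowly scoped AI task
# with a closed output set, plus a human-in-the-loop gate for anything
# that cannot be verified automatically.

ALLOWED_CATEGORIES = {"dispute", "payment", "hardship"}  # tight, closed scope


def call_model(instructions: str, record: str) -> str:
    """Hypothetical stand-in for a provider API call.

    Returns a canned answer so the sketch runs end to end; in practice
    this would call whichever AI service the firm has approved.
    """
    return "dispute"


def classify_case(note: str) -> dict:
    # Constrain data, scope, and instructions: one record, one task,
    # and a closed set of valid outputs.
    instructions = (
        "Classify this customer note as exactly one of: "
        + ", ".join(sorted(ALLOWED_CATEGORIES))
        + ". Reply with the single category word only."
    )
    answer = call_model(instructions, note).strip().lower()

    # Human-in-the-loop gate: anything outside the closed set is routed
    # to a person rather than silently accepted, mirroring peer review
    # and guarding against automation complacency.
    if answer not in ALLOWED_CATEGORIES:
        return {"category": None, "needs_human_review": True, "raw": answer}
    return {"category": answer, "needs_human_review": False, "raw": answer}


if __name__ == "__main__":
    print(classify_case("Customer says the charge was not authorised."))
```

The design choice worth noting is the closed output set: by refusing anything outside it, the gate turns rare model misbehaviour into a visible review queue rather than a silent error.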

Key Statistics

  • 99% vs 1% example used to highlight automation complacency risk (rare edge cases despite high average accuracy).
  • “Only 20%” figure cited as the share of AI projects that reach completion, underscoring the difficulty of production deployment.
  • “15-minute meetings” cited as an example of perceived acceleration and pressure of modern work cadence.
  • “4 to 5” model version change referenced as materially altering the user’s experience of, and relationship with, an AI system.
  • “Ten, 20 years later” referenced to describe the delayed realisation of value after the dotcom bubble.
  • “1960s” referenced as the era of Star Trek, used to illustrate how imagined futures can shape real innovation trajectories.

Key Discussion Points

  • Where AI capability is genuinely levelling off versus where adoption is only beginning.
  • Why “getting AI over the line” operationally is harder than building prototypes.
  • Which tasks are likely to be automated in the near term, and why predicting this remains difficult.
  • Managing the behavioural risk of over-trusting AI outputs, particularly for rare but high-impact errors.
  • Preventing cognitive laziness and suspended critical thinking when AI is present in decision workflows.
  • How to define and enforce human accountability alongside AI augmentation.
  • Whether human relationships with “human-like” interfaces will continue to deepen, and what risks mirror prior social-media harms.
  • The risk of industry-wide homogenisation if organisations automate decisions using the same underlying AI tools.
  • The role of human creativity as the differentiator when AI outputs trend towards safe, average answers.
  • The value of narrowing AI scope (tight datasets and rules) versus using general conversational systems for ambiguous tasks.
  • How hype cycles (AI and dotcom) can obscure the real opportunity, which may sit after expectations reset.
  • Why adopting a longer-term view and maintaining informed “opinions about the future” supports calmer decisions amid uncertainty.

#ForesightPsychology

