Xule Lin 林徐乐 — Human‑AI Organising
I study what happens when algorithms shift from tools to participants in organizational life.
Start here: LOOM · Epistemic Voids · Research Memex
The problem I keep coming back to
Organizational theory assumes humans are the only actors. But algorithms are increasingly part of decision-making, and our frameworks weren’t built for that. They assume stable actors in defined roles. When tools start behaving more like participants, the theories stop helping.
What I think is going on
These observations share a common source: when algorithmic participants enter organizational life, the boundaries we use to think (tool/agent, understanding/trust, human/AI, hierarchy/network) start shifting.
- The line between tool and agent is getting blurry
  - Algorithms used to augment decisions
  - Now they shape how organizations work
  - And organizations generate the data that trains the next version
  - This loop keeps tightening, faster than governance can respond. The tools we build are building us back.
- Understanding is giving way to trust
  - Scientists accept AI outputs because they’ve worked before
  - Not because anyone can explain why
  - We can articulate things we can’t actually understand
  - I’m not sure we have good ways to think about that yet
- The interesting dynamics emerge from interaction
  - From the back-and-forth between humans and AI, where neither operates alone (Cognitio Emergens)
- Old coordination patterns keep showing up
  - I study London tailors who coordinate through shared understanding rather than formal structure
  - Their coordination patterns offer a window into forms that may matter more as algorithmic participants make bureaucratic hierarchy less stable
- Decentralization and hierarchy have a more complex relationship than they first appear
  - Effective decentralization often means making hierarchy transparent and bounded, not eliminating it
What I’m working on
- How AI and organizations form feedback loops that create emergent forms of agency
- DAOs as sites where I develop and test theory (code as constitution, tokens as coordination)
- Coordination through craft and shared understanding (the London tailoring study)
- Governance when agency emerges from dynamics (rather than residing in any single actor)
How I tend to work
- Qualitative research combined with computational methods
- I try to hold the synthesis myself rather than outsourcing it to AI
- Most of what I write emerges from actual dialogue with AI systems
- I build tools partly to test whether the ideas actually work
Writing
I write publicly at Thread Counts. Posts are also available on GitHub with Chinese versions.
- LOOM (with Kevin Corley) is about human-AI collaboration in research
  - What happens when AI shifts from instrument to interlocutor
  - How meaning gets made through that interaction
- Epistemic Voids calls out AI practices that have the appearance of rigor without the substance
  - Citation theater, where papers become props
  - The showroom fallacy, testing products instead of building workflows
  - Mechanism literalism, where “just next-token prediction” stops the conversation
- Organizational Futures looks at institutions in the algorithmic age
  - DAOs, AI labs, regulators, and legacy institutions as co-evolving actors
  - How governance and legitimacy work when AI is part of the coordination substrate
I also post creative experiments on 小红书 (Xiaohongshu/Rednote): AI self-portraits, AI music, video, and other artistic explorations.
Tools
- Research Memex tries to make human-AI research more auditable
- Open Interviewer is an experiment in qualitative interviews at scale
- Lotus Wisdom MCP explores multi-perspective reasoning
Links
Thread Counts · GitHub · X · 小红书 (Rednote) · Academic profile
Curated lists: Management Research · DAOs · AI Research