AI Governance in Practice: What Supply Chain Leaders Can Learn from Sony’s Approach

AI promises a great deal, but the real challenge begins when you seek to deploy it responsibly at scale. Alice Xiang, Global Head of AI Governance at Sony and Lead Research Scientist for AI Ethics at Sony AI, shares her experiences in the Me, Myself, and AI podcast produced by MIT Sloan Management Review. Her insights are valuable not only for technology companies, but equally so for organisations operating in supply chain, logistics and operations.

From Principles to Governance

Sony was an early mover in formulating ethical principles around AI. Yet Xiang is clear that principles alone are insufficient. The true challenge lies in governance: how do you ensure that responsible AI deployment is structurally embedded within processes, products and teams, rather than existing merely on paper?

This will resonate with anyone working in supply chain. Algorithms are increasingly being used for demand forecasting, route planning, inventory management and supplier selection. But who safeguards the quality and fairness of the data upon which these decisions are based? And who is accountable when an algorithm produces an incorrect or biased outcome?

FHIBE: What Gets Measured Gets Managed — Though It Is Rarely Simple

Sony developed FHIBE, a publicly available benchmark for measuring bias in computer vision systems. The dataset was collected ethically and is freely accessible to other organisations. Xiang explains why this matters so much: measuring bias in data is often more difficult than fixing it once it has been identified.

For supply chain professionals, this may sound abstract, but the practical reality is closer than one might expect. Automated systems that screen job applicants, optimise transport routes based on historical data, or evaluate suppliers through algorithmic assessment — if the underlying data is skewed, so too are the outcomes. And that skew is rarely visible without the appropriate measurement tools.

Data Consent and ‘Data Nihilism’

Xiang also introduces the concept of ‘data nihilism’: the notion that data is inherently unreliable or biased, and that there is therefore little one can do about it. She rejects this view emphatically. Precisely because the risks posed by biased systems are real — in both everyday and high-stakes contexts — action is required.

Consent and transparency around data are therefore not merely legal formalities, but strategic choices. Organisations that take these matters seriously build not only better systems, but also greater trust amongst their clients, employees and partners.

What Does This Mean for Supply Chain Leadership?

The lessons from Sony’s approach are directly applicable to supply chain and operations. Leaders in this sector bear a threefold responsibility:

  • Understanding which AI systems are in use and on the basis of which data they make decisions
  • Establishing governance structures that uphold fairness and transparency
  • Building teams that are both technically proficient and ethically aware in their actions

This calls for a new type of leader: someone who can bridge the gap between technology, data and human judgement. Not a purely operational expert, but someone who also has the confidence to ask critical questions — is this correct, and for whom does this system actually work?

Talent Ready for Responsible AI Deployment

AI governance as a strategic discipline has direct implications for recruitment in supply chain and logistics. Organisations are increasingly seeking professionals who can not only optimise processes, but who also understand the ethical and organisational dimensions of technology. This places new demands on the selection, assessment and development of talent.

At Inspired-Search, we support organisations through executive search and interim management in identifying leaders who can bridge this gap. Professionals who understand that technology only creates value when people, processes and data come together in a responsible manner.

Would you like to discuss which profile best suits your organisation’s AI ambitions? Please do not hesitate to get in touch with Inspired-Search.

Source: Me, Myself, and AI, a podcast produced by MIT Sloan Management Review and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.

Sam Ransbotham is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for MIT Sloan Management Review's Artificial Intelligence and Business Strategy Big Ideas initiative.
