The QC reported this morning on Mid Staffordshire hospital, which managed to kill hundreds of patients between 2005 and 2008 (and probably since) through negligence.
I hope the learned gentleman had economists on his public inquiry team, because the disaster encapsulated the results of misaligned incentives and unintended consequences.
The management were too busy hitting financial and throughput targets: they optimised for the regulatory requirements by abusing the patients.
This example represents a very generic problem, and I think the best framework for analysing it actually comes from Artificial Intelligence.
There are two kinds of systems (agents, in AI-speak): those which are intrinsically good (which we like and admire), and bad systems, which we can't trust and which we therefore try to hedge around with regulation.
In AI terms, we'd model this as an agent in its environment. The environment defines certain beliefs, practices and goals as 'good', and the agent is judged by whether its own beliefs, practices and goals are aligned with them. Many natural and artificial systems are examples of this general paradigm.
(Notice 'good' and 'bad' are never absolutes but are always relativised to the nature of the environment: this is how to think about ethics.)
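To make the misalignment failure mode concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: the names, the one-dimensional 'care effort' variable, and the assumption that care and throughput compete for a fixed budget of staff time. The agent is judged only on a measurable proxy, so hill-climbing on the proxy quietly drives the environment's true goal to zero:

```python
import random

# Toy sketch of proxy misalignment. All names and numbers are hypothetical:
# a fixed budget of staff time is split between care and throughput, the
# regulator measures only throughput, and the agent optimises that measure.

def true_welfare(care_effort: float) -> float:
    """The environment's real objective: patient welfare rises with care."""
    return care_effort

def proxy_score(care_effort: float, budget: float = 1.0) -> float:
    """The regulator's measurable proxy: time not spent on care is assumed
    to go to throughput targets."""
    return budget - care_effort

def hill_climb_on_proxy(steps: int = 1_000) -> float:
    """The agent hill-climbs the proxy; it never sees the true objective."""
    effort = 0.5
    for _ in range(steps):
        candidate = min(1.0, max(0.0, effort + random.uniform(-0.05, 0.05)))
        if proxy_score(candidate) > proxy_score(effort):
            effort = candidate
    return effort

if __name__ == "__main__":
    random.seed(0)
    effort = hill_climb_on_proxy()
    print(f"proxy score (targets met): {proxy_score(effort):.2f}")  # -> ~1.00
    print(f"true welfare (patients):   {true_welfare(effort):.2f}")  # -> ~0.00
```

The Mid Staffordshire pattern drops straight out: the proxy score ends up perfect while the thing the proxy was meant to stand for has collapsed.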
In evolution, 'good' organisms are adapted to their niches; 'bad' ones are unfit and get culled.
In a capitalist economy, in a competitive market, some companies find their niches and flourish; others go bust.
In both these examples it's difficult to specify the niche (and therefore to specify formally what counts as 'good'), so we throw variation at the niche and see what works: failures are terminated.
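As a sketch of that logic (a toy, not a model of any real market or ecosystem: the 'niche' is faked as a single number hidden inside a fitness oracle, and the loop never consults a formal definition of 'good', only comparisons between variants):

```python
import random

# Toy sketch of variation-and-selection. The niche is faked as a single
# number (0.7) hidden inside the fitness oracle; the loop itself never
# needs a formal specification of 'good', only comparisons between variants.

def fitness(trait: float, niche: float = 0.7) -> float:
    """How well a variant fits the niche (closer is better)."""
    return -abs(trait - niche)

def evolve(population_size: int = 50, generations: int = 100) -> float:
    population = [random.random() for _ in range(population_size)]
    for _ in range(generations):
        # Selection: cull the less-fit half ("failures are terminated").
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Variation: survivors reproduce with small random mutations.
        offspring = [min(1.0, max(0.0, s + random.gauss(0, 0.02)))
                     for s in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    print(f"best surviving trait: {evolve():.3f}")  # converges near 0.7
```

Notice that nothing in the loop needed to write down what 'good' means in advance; culling the losers did the specifying for us.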
Competition and choice: there are public policy lessons here! Most notably, regulation is always second best: it rarely works well and often doesn't work at all. And more is often less.