Posts Tagged with "thinking-in-systems"
Reference #310: Thinking in Systems
A central insight of systems theory is that systems cause their own behaviour. An outside event may trigger that behaviour, but the same event applied to a different system is likely to produce a different result.
Reference #311: Thinking in Systems
The behaviour of a system cannot be deduced from its elements alone.
Reference #312: Thinking in Systems
A system is an interconnected set of elements coherently organised in a way that achieves something. It comprises the following components: elements, interconnections, and a function or purpose.
Reference #313: Thinking in Systems
There exist many systems, and systems can be embedded in other systems. But not everything is a system. For example, sand scattered on the road by chance has neither interconnections nor a function.
Reference #314: Thinking in Systems
Systems are resilient. Many are evolutionary.
Reference #315: Thinking in Systems
While often visible, elements of a system can be intangible. Examples include school pride and academic prowess in a university.
Reference #316: Thinking in Systems
Many interconnections in a system are flows of information. This information is used at decision or action points within a system, such as a consumer deciding what to buy based on her income, savings, and goods available for purchase.
Reference #317: Thinking in Systems
Purposes are best deduced from behaviour. Behaviour, not stated goals, reveals a system's purpose.
Reference #318: Thinking in Systems
Each of the components of a system — elements, interconnections, and functions or purpose — is essential, so none is more or less important to that system. Yet there exists a hierarchy of importance in driving behaviour.
Reference #319: Thinking in Systems
In a system, a stock is a store of a quantity, or an accumulation of some resource. This may be physical, such as water in a bathtub, or intangible, such as motivation or collective knowledge.
Reference #320: Thinking in Systems
Dynamic equilibrium occurs when the inflow and outflow of a system are equal yet non-zero; the level of the stock remains constant.
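A minimal sketch of dynamic equilibrium in Python (the bathtub, rates, and time step below are illustrative assumptions, not from the text):

```python
# Dynamic equilibrium: inflow and outflow are equal but non-zero,
# so water keeps moving through the tub while the stock holds steady.
stock = 50.0                  # litres in the bathtub (assumed)
inflow, outflow = 5.0, 5.0    # litres per minute, equal yet non-zero

for minute in range(1, 11):
    stock += inflow - outflow             # net change is zero each step
    print(f"minute {minute}: stock = {stock:.1f} L")
```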
Reference #321: Thinking in Systems
All models, whether mental models or mathematical models, are simplifications of the real world.
Reference #322: Thinking in Systems
We tend to focus more easily on stocks than flows, and on inflows more easily than outflows. This means we often overlook that a stock can be increased not only by increasing its inflow but also by decreasing its outflow.
Reference #323: Thinking in Systems
A key to understanding system behaviour is recognising that a stock takes time to change. Stocks respond to changes in flow gradually; this is especially true for large stocks. Stocks act as delays or as buffers in a system.
Reference #324: Thinking in Systems
Stocks allow for independent and decoupled inflows and outflows. This can provide certainty, continuity, and predictability to many systems even when flows vary in the short term.
Reference #325: Thinking in Systems
Most decisions — individual and institutional — are designed to regulate the levels of stocks. In that way, the world can be seen as a collection of stocks and mechanisms for changing them by manipulating flows. It is a collection of feedback processes.
Reference #326: Thinking in Systems
A feedback loop is a mechanism by which the behaviour of a system persists over time. When changes in a stock affect the flows into or out of that stock, a feedback loop is formed.
Reference #327: Thinking in Systems
A balancing feedback loop is one that stabilises the stock level at a given value or range of values. It is goal-seeking and stability-seeking.
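A hedged sketch of a balancing loop, using a thermostat-style example (the goal, starting value, and adjustment fraction are assumptions for illustration):

```python
# Balancing loop: the flow is proportional to the gap between the stock
# and its goal, so the stock converges towards the goal.
goal = 20.0              # target room temperature in °C (assumed)
temperature = 10.0       # current stock level (assumed)
adjustment = 0.5         # fraction of the gap closed per step (assumed)

for step in range(1, 11):
    gap = goal - temperature
    temperature += adjustment * gap       # goal-seeking behaviour
    print(f"step {step}: temperature = {temperature:.2f} °C")
```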
Reference #328: Thinking in Systems
A reinforcing feedback loop is characterised by amplifying and self-multiplying behaviour, creating a vicious or virtuous cycle. It generates more input to a stock the more of that stock there is (and less input the less there is).
Reference #329: Thinking in Systems
A reinforcing loop occurs wherever an element of a system has the ability to grow as a constant fraction of itself. Such behaviour is commonly seen in populations and economies, and these reinforcing loops exhibit exponential growth.
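As a sketch, growth by a constant fraction of the stock itself compounds into exponential growth (the initial population and rate are illustrative assumptions):

```python
# Reinforcing loop: the inflow is a constant fraction of the stock,
# so the more stock there is, the faster it grows.
population = 1_000.0     # initial stock (assumed)
growth_rate = 0.07       # grows by 7% of itself each year (assumed)

for year in range(1, 11):
    population += growth_rate * population    # more stock -> more inflow
print(f"after 10 years: {population:,.0f}")   # roughly double
```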
Reference #330: Thinking in Systems
The time taken for an exponentially growing stock to double in size (the "doubling time") is approximately 70 divided by the growth rate as a percentage. For example, if you put $100 into your bank returning 7% interest per year, you will double your money in 10 years.
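The rule of 70 can be checked against the exact doubling time, ln 2 / ln(1 + r); a quick sketch:

```python
import math

rate_percent = 7.0                                 # 7% growth per year
approx = 70 / rate_percent                         # rule of 70
exact = math.log(2) / math.log(1 + rate_percent / 100)

print(f"rule of 70: {approx:.1f} years")           # 10.0 years
print(f"exact:      {exact:.1f} years")            # ~10.2 years
```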
Reference #331: Thinking in Systems
In real systems, unlike simple models, there is rarely a single feedback loop. Rather, multiple feedback loops are frequently linked together, creating complex system behaviour.
Reference #332: Thinking in Systems
A system will always have delays in responding to information. Information in a feedback loop — for example, the level of a stock — takes a non-zero natural time to feed back into the system. This means it cannot affect current behaviour, only future behaviour.
Reference #333: Thinking in Systems
Complex systems often have several competing feedback loops operating simultaneously. The relative strength of these feedback loops can shift over time; this leads to shifting dominance of different loops and is a cause of complex behaviour.
Reference #334: Thinking in Systems
The existence of delays in a balancing loop makes a system likely to oscillate. Consider how a car dealership responds to a 10% increase in sales from increased customer demand.
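A minimal sketch of that dynamic, assuming a three-step delivery delay and a dealer who naively orders the full gap while ignoring cars already on order (all numbers are illustrative):

```python
# A balancing loop with a delay: the dealer orders cars to close the gap
# between desired and actual inventory, but deliveries arrive three steps
# later, so the inventory overshoots the target and oscillates.
DELAY = 3
inventory = 30
pipeline = [10] * DELAY          # orders placed under the old demand

for step in range(15):
    sales = 10 if step < 3 else 11          # demand rises 10% at step 3
    desired = 3 * sales                     # hold three steps' worth of stock
    inventory += pipeline.pop(0) - sales    # deliveries in, sales out
    order = max(0, desired - inventory + sales)   # naively close the gap
    pipeline.append(order)
    print(f"step {step + 1}: inventory = {inventory}")
```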
Reference #335: Thinking in Systems
Delays are strong drivers of system behaviour. Changing the length of a delay may have a large impact on the system.
Reference #336: Thinking in Systems
In a finite environment, no physical system can grow forever. All physical, growing systems will eventually face constraints in the form of a balancing loop. These systems will always have at least one reinforcing loop driving growth and at least one balancing loop constraining growth.
Reference #337: Thinking in Systems
A system that draws from non-renewable resources is stock-limited. The system can exhibit exponential growth but reaches the limit of the resource quickly — the faster the extraction rate, the shorter the lifetime of the resource.
Reference #338: Thinking in Systems
A system that draws from renewable resources is flow-limited. It can extract from a resource indefinitely but only at a flow rate equal to or less than the resource's regeneration rate.
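A sketch contrasting sustainable and unsustainable harvest rates, assuming the stock regenerates at a simple 5% of itself per year (a deliberate simplification of real regeneration curves):

```python
# Flow-limited resource: a harvest below the regeneration flow can continue
# indefinitely; a harvest above it drains the stock towards collapse.
def final_stock(harvest, years=30, stock=1000.0, regen=0.05):
    for _ in range(years):
        stock = max(stock + regen * stock - harvest, 0.0)
    return stock

print(final_stock(harvest=40.0))   # below ~50/yr regeneration: stock persists
print(final_stock(harvest=70.0))   # above it: stock collapses to zero
```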
Reference #339: Thinking in Systems
When a resource system is drawn from at a faster rate than it can sustain, it may or may not survive and rebuild itself. There are three possible outcomes:
Reference #340: Thinking in Systems
Renewable and non-renewable resource systems both limit a physical system from growing indefinitely. However, they apply different constraints. This is due to the difference between the systems' stocks and flows.
Reference #341: Thinking in Systems
Resilience, self-organisation, and hierarchy are three characteristics of highly functioning systems.
Reference #342: Thinking in Systems
Resilience is a measure of a system's ability to survive in a changing environment. The opposite of resilience is brittleness or rigidity.
Reference #343: Thinking in Systems
Stability is not the same as resilience. Constancy can lead to fragile systems.
Reference #344: Thinking in Systems
Unless you exceed the limits of a system, resilience can be difficult to observe. And so without a full view of the system we often trade off resilience for stability, productivity, or some other more immediately recognisable system property.
Reference #345: Thinking in Systems
Self-organisation is the capacity of a system to make its own structure more complex. This property is what enables babies to learn, humans to evolve, and large organisations to form.
Reference #346: Thinking in Systems
Self-organisation is often sacrificed for short-term productivity or stability, such as when humans are treated solely as hands (not heads) for enabling production.
Reference #347: Thinking in Systems
Simple organising principles lead to diverse self-organising structures.
Reference #348: Thinking in Systems
Self-organising systems often generate hierarchy. Hierarchy is the arrangement of systems and the subsystems within them. For example, you are a subsystem of your family system, a family is a subsystem of a city, a city is a subsystem of a nation, and so on.
Reference #349: Thinking in Systems
Complex systems can evolve from simple systems only if there are stable, intermediate forms. This is why hierarchies are so common in natural systems.
Reference #350: Thinking in Systems
In a hierarchical system, the relationships within a subsystem are stronger and more dense than relationships between subsystems. If the information links between levels are designed correctly, no level is overwhelmed with information, and the system works with efficiency and resilience.
Reference #351: Thinking in Systems
Sub-optimisation is a system behaviour wherein a subsystem's goals dominate at the expense of the total system's goals. For example, a team member more focused on his personal glory than the success of the team can cause that team to fail.
Reference #352: Thinking in Systems
Everything we know is a model. These models are similar to, but not always congruent with, the real world. They fall far short of fully representing our world, which can lead us to make mistakes.
Reference #353: Thinking in Systems
When encountering a problem, a systems thinker looks at the history of the system, not just at individual events. She seeks to understand not only the "what" but the "why".
Reference #354: Thinking in Systems
Event-level analysis provides no predictive power. It also gives you no ability to change the behaviour of the system.
Reference #355: Thinking in Systems
As Gleick wrote in "Chaos: Making a New Science" (1987), linear systems are modular — they can be taken apart and put back together. In contrast, non-linear systems generally cannot be added together. Doing so can create unexpected behaviour.
Reference #356: Thinking in Systems
In a nonlinear system, a change does not produce an effect of proportionate size. For example, the average speed of cars on a freeway is affected only slightly by an increase in the density of traffic.
Reference #357: Thinking in Systems
Systems rarely have real boundaries. Everything is connected to everything else. Boundaries in system diagrams are convenient simplifications; boundaries in the real world are of word, thought, perception, and social agreement — they are artificial.
Reference #358: Thinking in Systems
Since there are no separate systems, where you draw a system boundary depends on the questions you want to ask.
Reference #359: Thinking in Systems
The right boundary for thinking about a problem rarely coincides with an academic (functional) or political boundary.
Reference #360: Thinking in Systems
The most important input to a system is that which is most limiting, called the "limiting factor". What factor is limiting may change over time.
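One common way to model a limiting factor (not spelled out in the text) is Liebig's law of the minimum, where growth is set by the scarcest input; a sketch with assumed values:

```python
# Law of the minimum: growth is governed by the scarcest factor, so raising
# an already-ample input changes nothing. Values are illustrative (1.0 = ample).
factors = {"nitrogen": 0.9, "water": 0.4, "sunlight": 0.8}

growth = min(factors.values())               # limited by the scarcest input
limiting = min(factors, key=factors.get)
print(f"growth limited to {growth:.0%} by {limiting}")   # 40% by water
```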
Reference #361: Thinking in Systems
Growth in a system enhances or depletes limits. What is limiting is often changed by growth.
Reference #362: Thinking in Systems
Growth occurs when a factor ceases to be limiting. This growth continues until the next limit is reached, which could be the same or a different factor.
Reference #363: Thinking in Systems
Perpetual growth is impossible in a finite environment. The choice, then, is to decide what limits to live within.
Reference #364: Thinking in Systems
Bounded rationality is the human decision-making process whereby we make reasonable decisions based on the limited information we have. This contrasts with Adam Smith's view of humans as "homo economicus", who act completely rationally and with perfect information.
Reference #365: Thinking in Systems
Not only do we act on incomplete information, but we often misinterpret the information we have. We misperceive risk, weigh recent actions more heavily than past experience, ignore information that doesn't fit our mental models, and more.
Reference #366: Thinking in Systems
Economic theory as derived from Adam Smith rests on two core assumptions, both of which are challenged by bounded rationality. First, the theory assumes that humans are "homo economicus" who act optimally, entirely rationally, and with complete information.
Reference #367: Thinking in Systems
The decisions of an individual — that is, their behaviour — often arise from the system they are part of. If you were to replace an individual exhibiting undesirable behaviour with another, it is unlikely the new person will act very differently. This is often attributable to bounded rationality.
Reference #368: Thinking in Systems
Contrary to Adam Smith's assumption of an "invisible hand", the bounded rationality of each actor in a system may not lead to decisions that benefit the system as a whole. Replacing individuals rarely improves performance.
Reference #369: Thinking in Systems
Several system archetypes that produce common patterns of behaviour have been identified. These archetypes arise from system structure, not necessarily the specifics of a system, and so are observed in many different settings. The archetypes are as follows:
Reference #370: Thinking in Systems
Policy resistance (or "fixes that fail") is a system archetype where multiple actors pull the system's stock towards various, competing goals.
Reference #371: Thinking in Systems
The tragedy of the commons is a system archetype where, by their own bounded rationality, the users of a shared resource — the commons — harvest that resource until it is over-harvested and then destroyed. Examples of commons are a shared pasture, a national park, or the global environment.
Reference #372: Thinking in Systems
There are three approaches to avoiding the tragedy of the commons:
Reference #373: Thinking in Systems
Reference #374: Thinking in Systems
Escalation is a system archetype wherein the desired state of one stock is continually set above the perceived state of another stock, and vice versa.
Reference #375: Thinking in Systems
The system archetype of "success to the successful" is where one party can use their wealth, privilege, or information to create more wealth, privilege, or information.
Reference #376: Thinking in Systems
The game of Monopoly is a classic example of success to the successful. Once a player builds hotels to extract rent from other players, they are able to use that rent money to build more hotels and hence collect more rent.
Reference #377: Thinking in Systems
Success to the successful is known in the field of ecology as the "competitive exclusion principle". It states that two species cannot live in the same ecological niche competing for the same resources.
Reference #378: Thinking in Systems
Competition in a market systematically eliminates market competition. This analysis, made by Karl Marx, can be seen to be true in any competitive market or where one previously existed. For example, in the U.S.
Reference #379: Thinking in Systems
Within the "success to the successful" archetype, not only do the rich get richer but the poor get poorer. For example, often the poorest children receive the worst education in the worst schools. This leads them to develop few marketable skills, allowing them to quality for only low-paying jobs.Read more →
Reference #380: Thinking in Systems
There are several ways to break out of the "success to the successful" trap: diversification, creating balancing feedback loops to keep competitors from taking over entirely, or periodically levelling the playing field.
Reference #381: Thinking in Systems
The system archetype of shifting the burden to the intervenor arises when a solution to a systemic problem reduces or disguises the symptoms but does nothing to solve the underlying issue. This is also known as dependence or addiction.
Reference #382: Thinking in Systems
In a "shifting the burden to the intervenor" system, the stock can be physical such as a crop of corn, or meta-physical such a sense of wellbeing or self-worth. The intervention does not need to close the gap your desired state and actual state; instead, it can alter the perceived state.Read more →
Reference #383: Thinking in Systems
A well-meaning intervenor can cause a system to become dependent on them.
Reference #384: Thinking in Systems
Dependence or addiction makes long-term systemic solutions more difficult.
Reference #385: Thinking in Systems
To avoid the trap of shifting the burden to the intervenor, be vigilant for policies or practices that relieve symptoms, or inhibit signals, without addressing the underlying problem. Focus not on short-term relief but on what long-term changes would improve the system.
Reference #386: Thinking in Systems
Rule beating (a system archetype) means taking evasive actions to abide by the letter of the law but not its spirit. You circumvent the intent of a system's rules. This can lead to unnatural and harmful behaviours that would make no sense if the rules were absent.
Reference #387: Thinking in Systems
To avoid harmful rule beating, a system's rules must be designed with the full, evolving system in mind. As the system is self-organising, you must design, redesign, explain, or remove the rules to focus creativity not on beating those rules but on achieving their purpose.
Reference #388: Thinking in Systems
Systems often produce exactly — and only — what you ask them to produce. A system's balancing loops work towards its goal or purpose. When a system is designed towards the wrong goal, it often achieves outcomes that don't improve the welfare of the system.
Reference #389: Thinking in Systems
Seeking the wrong goal is almost the opposite of rule beating. With the former, the system works towards the goal and produces a result that people may not actually want. In rule beating, the system seeks to evade an unpopular rule.
Reference #390: Thinking in Systems
Leverage points are places in a system where a small change applied can lead to a large change in behaviour. According to Jay Forrester, most managers experienced at working in their system are able to guess accurately where leverage points may be found. Yet these points are often counterintuitive.
Reference #391: Thinking in Systems
Meadows identified twelve places within a system to find leverage points. From lowest to highest leverage, they are as follows:
Reference #392: Thinking in Systems
Parameters, and adjustment of them, have very little leverage over system behaviours. Such adjustments include the price a company sets for its product, the minimum wage, or how much tax money is spent on a particular area.
Reference #393: Thinking in Systems
Stocks that are big relative to their flows are more stable than small ones. These big, stabilising stocks are called buffers. A lake, with its large stock and comparatively low (slow) flows, is very stable.
Reference #394: Thinking in Systems
Physical structures are crucial to a system, yet they are rarely leverage points as they are slow and often difficult to change. Instead, leverage in these areas comes from proper initial design.
Reference #395: Thinking in Systems
The impact of a delay in a feedback process is relative to the rate of change of the stock that the feedback loop is trying to control. Delays that are too short cause overreaction; delays that are too long cause oscillations.
Reference #396: Thinking in Systems
Delays are not easily changeable — there's little you can do about the growth rate of a forest or the maturation rate of a child. However, you can often slow down the rate of change in a system. Even if you cannot adjust the delay, you can adjust the difference between the delay and the rate of change.
Reference #397: Thinking in Systems
By slowing a reinforcing loop (such as the rate of growth), you give balancing loops more time to function.
Reference #398: Thinking in Systems
Missing information flows are a common failure point in systems. Without feedback, balancing loops cannot keep the system in a stable and sustainable state.
Reference #399: Thinking in Systems
Rules, such as incentives and constraints, define a system's scope and its boundaries. The actor who has power over the rules has power over the system.
Reference #400: Thinking in Systems
Paradigms are the sources from which systems arise. Goals, information flows, and more come from shared social agreements on how the world works.
Reference #401: Thinking in Systems
Recognising that no paradigm is true enables you to choose whichever helps you achieve your purpose.
Reference #402: Thinking in Systems
Before making changes to a system, first observe it to understand its facts and behaviour. Our memories are unreliable and systems are complex, and so we hold many misconceptions of system behaviour that can be corrected through deliberate analysis.
Reference #403: Thinking in Systems
Expose your mental models to the light of day. Make them rigorous. Be willing to change them. Mental flexibility is a must in a world of flexible systems.
Reference #404: Thinking in Systems
Those who regulate the flow of information have power.
Reference #405: Thinking in Systems
Wrote Fred Kofman, "We don't talk about what we see; we see only what we can talk about."