These are my personal notes on An Introduction to General Systems Thinking: Silver Anniversary Edition by Gerald M. Weinberg.

Since this is for my reference, it isn’t a complete index of all concepts discussed in the book. Instead, it is a list of general systems laws, principles, and axioms discussed within it. If any of it piques your interest, you should totally buy the book. It discusses each of these in far more detail, complete with examples and exercises at the end of each chapter.

Italicized or quoted text indicates direct quotes from the book itself. Non-italicized text is my own commentary.

The Main Article of General Systems Faith

This is that the order of the empirical world itself has an order which might be called order of the second degree.

In other words, analyzing different, unrelated systems will produce laws that resemble one another. Most of this book discusses laws about laws.

There can be, and will be, inaccuracies in these generalizations, but accepting some inaccuracy is the only way to begin recognizing patterns.

The Law of Medium Numbers

Compare the Law of Large Numbers.

For medium number systems, we can expect that large fluctuations, irregularities, and discrepancy with any theory will occur more or less regularly.

Restated, it becomes Murphy’s Law: Anything that can happen, will happen.

Small number systems can be accurately analyzed because every component of those systems can be accounted for. Large number systems can be accurately analyzed because small fluctuations smooth out in the data. Medium number systems are a perfect storm of defying perfectly accurate analysis: too complex to consider all their components individually, yet too simple to absorb fluctuations. This book focuses on medium number systems.
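This isn’t from the book, but the smoothing effect that makes large number systems tractable can be seen in a quick simulation (the fair-coin system and the trial counts here are my own illustration):

```python
import random

random.seed(0)

def average_deviation(n_components, n_trials=200):
    """Average absolute deviation of a fair-coin system's mean from 0.5."""
    total = 0.0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_components))
        total += abs(heads / n_components - 0.5)
    return total / n_trials

# Fluctuations shrink as the number of components grows (roughly 1/sqrt(n)),
# which is why large number systems can be treated statistically while
# medium number systems cannot.
for n in (10, 100, 10_000):
    print(n, round(average_deviation(n), 4))
```

A system with ten components shows fluctuations large enough to swamp any statistical treatment; one with ten thousand barely fluctuates at all. Medium number systems sit in the uncomfortable gap between.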

In my observation, most computer systems are medium number systems, and so are many online communities.

The Law of Conservation of Laws

When the facts contradict the law, reject the facts or change the definitions, but never throw away the law.

“Never” is a strong word here, but the point is that system laws are established because they have proven to be useful. (Keep in mind that this is discussing not just second-order laws, but first-order laws as well — laws that may be specific to a particular group of systems.) To discard a law altogether should be a last resort; to do so is to fly in the face of evidence and to take away a tool that is currently in use.

A modern example of this problem is the prediction of dark matter. Science fans sometimes complain that the existence of dark matter shows that the law of gravity should be thrown out. But to do so would be to discard centuries of observational data and predictions that have proven to be useful in the real world. It’s not that this is wrong in some moral sense; it’s that it is unproductive. Until another law of gravity appears that can make the same predictions as our current law, discarding it altogether is a waste of effort and time.

So, despite the discomfort that comes from thinking so abstractly, the Law of the Conservation of Laws demands that we assign an unseen actor to the large-scale gravitational observations that fail to match our predictions. To the layman, it appears that a new phenomenon has been “invented” out of thin air in a desperate hope to preserve an antiquated law so that scientists can continue to be right. But to the general systems thinker, it is the only practical way to advance our knowledge. Dark matter may be real, or it may not be, or it may be a combination of unknown factors; the reality that the system represents doesn’t matter here. Eventually, that reality will be discovered, but until then, the abstraction of “dark matter” makes it possible to work with the system and produce useful predictions in the meantime.

Keeping a law around, even an imperfect one, is the only reasonable action to take in the face of contradicting evidence.

The Law of Happy Particularities

Any general law must have at least two specific applications.

Because otherwise, how can you be certain you’ve found a law at all?

The Law of Unhappy Particularities

Any general law is bound to have at least two exceptions.

Rephrased: If you never say anything wrong, you never say anything.

The Composition Law

The whole is more than the sum of its parts.

As the author points out, this law goes back to at least the time of Aristotle.

The Decomposition Law

The part is more than a fraction of a whole.

A part of something can have more uses than making up one whole. A part may exist in multiple wholes at once, for example, but play different roles in each.

The Banana Principle

Heuristic devices don’t tell you when to stop.

From an old joke where a schoolboy reports: “Today we learned how to spell ‘banana,’ but we didn’t learn when to stop.”

This principle cautions against the over-application of any manner of thinking. Eventually, forcing a particular method or perspective on every situation will lead to an absurd result.

The Principle of Indifference

Laws should not depend on a particular choice of notation.

It doesn’t matter what name you give to something, only what that name represents.

Though, applying the Law of Unhappy Particularities here, giving something the wrong name is likely to lead to misunderstandings and erroneous conclusions later. Obfuscated code will run fine on a computer, but will run far more slowly through the mind of its unfortunate maintainer.
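To sketch both halves of that point (the function names and the temperature-conversion example are my own, not the book’s): the two functions below are the same law under different notations, and the machine cannot tell them apart.

```python
def celsius_to_fahrenheit(celsius):
    """A clearly named statement of the conversion law."""
    return celsius * 9 / 5 + 32

def x(q):
    """The identical law under an obfuscated notation."""
    return q * 9 / 5 + 32

# The Principle of Indifference: the law does not depend on the notation.
assert all(celsius_to_fahrenheit(t) == x(t) for t in range(-40, 101))
```

The computer is indifferent; only the maintainer who later has to guess what `x` and `q` represent pays the price.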

The Eye-Brain Law

To a certain extent, mental power can compensate for observational weakness.

A sufficiently experienced person can make predictions about a system based on very few observations. In other words, cleverness can compensate for impatience.

The Brain-Eye Law

To a certain extent, observational power can compensate for mental weakness.

A sufficiently patient person can make predictions about a system by taking many observations. In other words, stubbornness can compensate for ignorance.

The Eye-Brain/Brain-Eye Laws have some interesting parallels with the Heisenberg Uncertainty Principle, perhaps hinting at some deeper truth about the relationship between time and information in general.

The Generalized Thermodynamic Law

More probable states are more likely to be observed than less probable states, unless specific constraints exist to keep them from occurring.

This is a generalization of the First and Second Laws of Thermodynamics.

Restated separately: The things we see more frequently are more frequent: 1. because there is some physical reason to favor certain states (the First Law) or 2. because there is some mental reason (the Second Law).

In my opinion, this restatement of the First and Second Laws is the least defensible part of the book — it groups entropy and observer bias into the same phenomenon. PBS Space Time’s video “The Misunderstood Nature of Entropy” talks about the relationship between microstates, macrostates, probability, and entropy in a much clearer way. It’s a good supplement to this section.

The Lump Law

If we want to learn anything, we mustn’t try to learn everything.

This law is introduced halfway through the book, but it’s also a callback to the first chapter, where the simplification of systems is discussed in greater depth.

When considering systems, some level of detail must be discarded. This is true both in terms of the system’s parameters and in terms of how we regard states. It’s only through the careful discarding of information that a sufficiently complex system’s behavior can be analyzed — or even computed.

To borrow terminology from “The Misunderstood Nature of Entropy,” it’s not always useful to think about all possible microstates when considering a system. Sometimes you get a lot more from considering the macrostates instead.
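The microstate/macrostate lumping can be made concrete with a small example of my own (coin flips, not anything from the book): lumping every possible flip sequence by a single parameter, the head count, throws away detail but makes the system far easier to reason about.

```python
from itertools import product

# Microstates: every individual sequence of 10 coin flips (2**10 of them).
flips = list(product("HT", repeat=10))

# Macrostates: lump microstates by one summary parameter, the head count.
macrostates = {}
for seq in flips:
    heads = seq.count("H")
    macrostates[heads] = macrostates.get(heads, 0) + 1

# 1024 microstates collapse into 11 macrostates; "5 heads" alone
# accounts for 252 of them.
print(len(flips), len(macrostates), macrostates[5])
```

We mustn’t try to learn everything — which exact sequence occurred — if what we want to learn is how the system behaves in aggregate.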

The General Law of Complementarity

Any two points of view are complementary.

In other words, two observations of the same system are likely to be incomplete and unequal to one another.

The Axiom of Experience

The future will be like the past, because, in the past, the future was like the past.

This is an axiom because, without it, none of general systems thinking would work; nor, for that matter, would any scientific approach.

The Invariance Principle

With respect to any given property [of a system], there are those transformations that preserve it and those that do not preserve it.

Reversed: With respect to a given transformation, there are those properties that are preserved by it and those that are not.

Restated: We understand change only by observing what remains invariant, and permanence only by what has transformed.

In other words, if we observed a system where its properties changed entirely at random based on any transformation (including none at all — the identity transform), nothing useful could be said about that system; it would be random noise and not a system at all. There must be some relationship between transformations and property changes.

For more ideas about what defines a system, see “The Strong Connection Law.”

The Perfect Systems Law

True systems properties cannot be investigated.

Weinberg’s definition of a “true” systems property is a property that cannot be preserved by any transformation of the system. In other words, a “true” systems property is inextricably linked to the identity of the system itself; making any change to the system will change the property.

Since they cannot be changed independently of the system, nothing can be learned about them.

Consider the letters in a word. The “written word” system can survive many transformations — typeface, color, size — but it cannot survive a change to its letters. Any change to these letters will result in the creation of a new system, a new word. Thus letters are a “true” systems property of a word, and nothing useful can be learned about their relationship to the “written word” system.

The Strong Connection Law

Systems, on the average, are more tightly connected than the average.

Restated: A system is a collection of parts, no one of which can be changed.

Systems are tightly coupled and put up resistance to decomposition. Something easily decomposed might still be regarded as a system, but it’s less useful to do so.

Restated: In systems, all other things are rarely equal.

It’s difficult to keep certain parts of a system fixed or separated from other parts.

The Picture Principle

While speaking about a dimensional reduction, insert the words “a picture of” in front of whatever you were about to say.

I think this could go further: any dimensional reduction produces a representation, of which a picture is one specific kind. “A representation of” might be a better generalization.

This is a principle that exists to keep oneself out of trouble. Confusing a representation of something with the actual thing can lead us to draw conclusions based on the representation that do not translate to the actual.

The Diachronic Principle

If a line of behavior crosses itself, then either: 1. the system is not state determined or 2. we are viewing a projection — an incomplete view.

It’s expected that, in a state-determined system, a line of behavior will not cross itself: every state leads to exactly one successor state. Feeding the parameters of a given state at time t into the system should always produce the same state at time t+1.

If a line of behavior crosses itself, that means one state can be the precursor to multiple future states, and that means that some information is missing.
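Here’s a toy illustration of my own (a point rotating 90° around the origin, not an example from the book): the full two-dimensional system is state determined and never crosses itself, but projecting away one coordinate produces an apparent crossing.

```python
# A state-determined system: rotate a point 90 degrees around the origin.
def step(state):
    x, y = state
    return (-y, x)

# The full line of behavior: every state has exactly one successor.
state = (1, 0)
trajectory = [state]
for _ in range(4):
    state = step(state)
    trajectory.append(state)

# Project away y. In the one-dimensional view, x = 0 appears twice
# with two different successors (-1, then 1): the line crosses itself,
# telling us the view is incomplete.
xs = [x for x, _ in trajectory]
print(trajectory)  # [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 0)]
print(xs)          # [1, 0, -1, 0, 1]
```

Nothing is wrong with the system; the missing y coordinate is exactly the “missing information” the principle points to.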

The Synchronic Principle

If two systems occupy the same position in the state space at the same time, then the space is underdimensioned, that is, the view is incomplete.

For a complete view, every system must have its unique place, which is ultimately what we mean by “complete” and by “system.”

The Count-to-Three Principle

If you cannot think of three ways of abusing a tool, you do not understand how to use it.

It’s important to know the boundaries of a tool when putting it to use. Coming up with abuses is a good way to discover them.

The Principle of Indeterminability

We cannot with certainty attribute [an] observed constraint either to [a] system or [to its] environment.

The same system may produce different behavior in different environments. Observing it in different environments may increase confidence in the origin of a constraint, but it can never be attributed with absolute certainty.

The Systems Triumvirate

The “three great questions that govern general systems thinking”:

  1. Why do I see what I see?
  2. Why do things stay the same?
  3. Why do they change?

The Law of Effect

Small changes in structure usually lead to small changes in behavior.

Rephrased: Small changes in the white box usually lead to small changes of the black box.

The Used Car Law

  1. A system that is doing a good job of regulation need not adapt.
  2. A system may adapt in order to simplify its job of regulation.

Here, “regulation” refers to the operational cost of keeping a system running. The primary example given by the book is that of a used car. The car slowly becomes less efficient (e.g. requires more gas to run) over time. Beyond a certain threshold of inefficiency, it makes better financial sense to accept the cost of replacing a failing part than to continue to pay the higher operational costs caused by it.

All systems have some operational cost to run. Systems only need to change when that operational cost becomes higher than the cost of changing the system. This is a description of the evolutionary process that many systems undergo.
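The break-even reasoning behind the used-car example can be sketched in a few lines (the dollar figures and the `should_adapt` helper are hypothetical, purely for illustration):

```python
def should_adapt(monthly_operating_cost, adapted_monthly_cost,
                 adaptation_cost, horizon_months):
    """Adapt only when the savings over the horizon exceed the one-time cost."""
    savings = (monthly_operating_cost - adapted_monthly_cost) * horizon_months
    return savings > adaptation_cost

# A car costing $300/month to run, vs $180/month after a $2,000 repair:
print(should_adapt(300, 180, 2000, 12))  # savings 120 * 12 = 1440 < 2000
print(should_adapt(300, 180, 2000, 24))  # savings 120 * 24 = 2880 > 2000
```

The same failing part is worth fixing or not depending on how long you intend to keep regulating the system — which is why well-regulated systems feel no pressure to adapt at all.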

In software development, many developers readily see that significant, or even total, changes would reduce a system’s operational cost. Very few developers, however, have an intuitive understanding of the cost of those radical changes themselves; see Joel Spolsky’s “Things You Should Never Do, Part I” for a deeper examination of this problem.

There is some discomfort when discussing the adaptation of systems, as adaptation means change, and over time, a system might adapt itself into an entirely different system, such as in the Ship of Theseus thought experiment. I suggest that the identity of a system is carried at that system’s boundary. In software development terms this might be considered the system’s contract; in more general terms, it might be the system’s name (such as “Ship of Theseus”). The components of the black box can change, gradually or all at once, but so long as the boundary is stable, so too is the system’s identity.

The final systems law stated in the book is the Used Car Law as rephrased for observers — really, for general systems thinkers:

  1. A way of looking at the world that is not putting excessive stress on an observer need not be changed.
  2. A way of looking at the world may be changed to reduce the stress on an observer.