Tyrant Model using Decision Process Theory

A classic game theory model for a tyrant might be a game of chicken, with a tyrant playing against the poor, each choosing between swerving and crashing. The difference, however, is that a tyrant has more power and influence than the poor. One aspect of game theory that we believe needs to change is that the size of the payoffs should matter.

As an example of something that plays no role in game theory, consider the concept of engagement. Multiplying the payoff matrix by the engagement does not change the max-min solution of game theory and is thus of no importance. Moreover, the payoff matrix strength is not relevant either. In physics, this is like saying that the charge of a particle does not influence its motion in a magnetic field, nor does the strength of the magnetic field. This is only true for a charge moving parallel to the field. The circulation around the field clearly depends on the charge as well as on the strength of the field.
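This invariance is easy to check numerically: scaling every payoff by a positive engagement factor never moves the max-min choice. A minimal sketch (the payoff values are illustrative and not taken from any model in this text):

```python
import numpy as np

def maxmin_row(payoff):
    """Index of the row (pure strategy) that maximizes the worst-case payoff."""
    worst_case = payoff.min(axis=1)   # worst outcome for each row choice
    return int(worst_case.argmax())   # best of the worst

payoff = np.array([[3.0, -1.0],
                   [0.0,  2.0]])

# Scaling by any positive "engagement" leaves the max-min solution unchanged.
for engagement in (0.01, 1.0, 100.0):
    assert maxmin_row(engagement * payoff) == maxmin_row(payoff)
```

The same invariance holds for mixed-strategy solutions, since multiplying by a positive constant preserves the ordering of all expected payoffs.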

The hard part is to disentangle the charge from the field strength. This is accomplished in physics by the field equations, which say that charges and currents are the sources of the fields. Strong engagement leads to changes in the players’ behaviors through changes in their payoff and valuation fields. What is amazing, however, is that engagement, like viscosity, enters the decision process theory equations in a non-linear way. Therefore, dramatically increasing or decreasing the engagement changes the qualitative behavior of the decision flow. You get an inkling of why when you realize that the engagement is itself a conserved flow and therefore contributes to the energy momentum of the system. It thus gets shared with the other component parts.

Like viscosity, when the engagement is small it has almost no effect other than adjusting the time scale. As it gets larger, the non-linear nature of its behavior becomes evident. It is even possible that we might get “chaotic” or “turbulent” behaviors, though there is currently no basis for this conjecture. The following is the result of a computation (using a decision engineering notebook described in the Stationary Ownership Model Update white paper and implemented using a Mathematica notebook) in which the poor outnumber the tyrant by 10:1 and the engagement of the tyrant to the poor is 1:10.

We have the following interpretation of this picture. From a game theory perspective, the model has two Nash equilibria. The poor see their strategy to be “swerve” and assume that the wealthy will “crash”. The wealthy see the opposite. This is one variant of the classic game of chicken. It is clear in the figure that by assuming the wealthy are much less engaged, we are closer to one of the Nash equilibria: the one seen by the poor, in which the poor “swerve” and the wealthy “crash”.

We see more, however. Because of the inequality between the two players, the preference of the poor to crash is going to be smaller over time than the preference of the wealthy to crash. So, in some sense, crashing is not as devastating for the wealthy as it is for the poor. There is much less incentive for the wealthy to choose “swerve”.

The strategic flows add some interesting aspects to the story. The magnitudes of the flows in this model run are all comparable. Nevertheless, they support the fact that the net behavior of the wealthy to swerve is zero and the net behavior of the wealthy to crash is small and positive. It is also clear that the net flow for the poor to swerve is much bigger than any of the other net flows.

We interpret this as follows. The poor will suffer and avoid the game of chicken; on average they will swerve. The wealthy will crash, making what appears to be the correct assumption that the poor will always swerve. These assumptions, however, rely on the small engagement of the wealthy. This appears to be the trademark of the tyrant: they suffer no consequence for their actions and are thus rewarded. The full decision process theory would go on to predict that the payoffs for the wealthy should not change, since their source is small. The poor, however, might change their payoffs over time to better reflect what is happening.

Is there anything that can be done? One possibility is to demand an equality of effort between the tyrant and the people, through laws, the courts, or the press, for example. An example computation is provided below, using a gravitation-like field to force equality: we imagine a potential well that is centered where the strategic effort of the poor matches the strategic effort of the tyrant.

The model generates a time component of the metric, which can be thought of as a “gravitational” field that pulls all strategies to this common center. We see the consequence of this in the above figure. We now expect that the tyrant will see consequences to their actions, despite their initial lack of engagement. In fact, that engagement might itself change as a consequence.

Boundary Conditions and Common Ground

This is an inquiry into boundary conditions for decision process theory models and how important they are. In a way, the answer is trivial: of course boundary conditions matter for any theory based on elliptic partial differential equations. The question is whether the boundary shapes play an essential role.

Let’s take an example from electrical engineering. A conductor of a given shape with an applied voltage produces an electric field that is characterized as much by the shape as by the voltage, if not more so (depending on the value of the voltage). To make the same claim for decision processes, we have to argue that there is something analogous to the “shape” in the space of strategies, as well as something analogous to a conductor that dictates the field on the boundary of the shape.
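The conductor analogy can be made concrete with a toy elliptic problem: fix the potential on the boundary of a region, as on a conductor, and the interior field is determined by the shape and those boundary values alone. A minimal sketch (grid size, boundary voltage, and iteration count are arbitrary choices):

```python
import numpy as np

# Dirichlet problem for Laplace's equation on a square: the top edge is a
# "conductor" held at 1, the other edges are held at 0.
n = 30
v = np.zeros((n, n))
v[0, :] = 1.0

# Jacobi relaxation: repeatedly replace each interior value by the average
# of its four neighbors, the discrete form of the Laplace equation.
for _ in range(2000):
    v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                            + v[1:-1, :-2] + v[1:-1, 2:])

# Maximum principle: interior values lie between the boundary extremes.
assert 0.0 <= v[1:-1, 1:-1].min() and v[1:-1, 1:-1].max() <= 1.0
```

Changing either the boundary values or the shape of the region changes the interior solution everywhere, which is the sense in which the boundary "dictates" the field.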

We move in this direction in discussing the gauge conditions for the payoff field potential. We can argue that on the boundary it might be true that the payoff is “fair” in the sense that there should be no player bias field and that any strategic payoff moving a strategy from one direction in the surface to another should be zero. In other words, there should be no interest in changing the player engagements on the boundary and no strategic advantage to changing payoffs that reside totally within the surface.

In transforming a game into a symmetric game for example, it is common practice to assume that there are no self-payoffs. The strategies owned by a player have a certain internal stability as seen by that player. Let’s call this property strategy neutral.

A surface that is both fair and strategy neutral is a strategically conducting surface. This is a new distinction. We argue that strategies that occur within a region bounded by a strategically conducting surface will be determined more by the shape than by the field values on the boundary. Such surfaces thus form an important subset to consider in the general case.

A less imposing name might be common ground. This brings two concepts together: one from negotiations and one from electrical engineering. To get people to agree, one needs to go to a place which is strategy neutral for each party. The idea of a ground in electrical engineering is a surface with a constant potential: a conducting surface.

This definition of strategy neutral is based on the behavior of payoffs. However, payoffs are only part of the story. Payoffs result from payoff potentials, which are the part of the measure between one active and one inactive strategy of the space. More generally, we have a metric $g_{ab}$ that represents a measure between any two points. This tensor potential generalizes the vector potentials ${A^k}_a$ that determine the player payoffs, which are like the electric and magnetic fields in electrical engineering. In particular, if both points are active, then the tensor potential determines the frame transformation $\boldsymbol{\omega} = {\omega^a}_{bc}\,dx^c$, whose differential is the flux flow through a 2-surface. The covariant and gauge-invariant differential is $R = d\boldsymbol{\omega} + \boldsymbol{\omega} \wedge \boldsymbol{\omega}$, and the analogy is that this flow is zero through the surface.
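For reference, the curvature introduced here has the standard component form from differential geometry (sign conventions vary by author):

```latex
R^{a}{}_{b} = d\,\omega^{a}{}_{b} + \omega^{a}{}_{c} \wedge \omega^{c}{}_{b},
\qquad
R^{a}{}_{bcd} = \partial_{c}\,\omega^{a}{}_{bd} - \partial_{d}\,\omega^{a}{}_{bc}
              + \omega^{a}{}_{ec}\,\omega^{e}{}_{bd} - \omega^{a}{}_{ed}\,\omega^{e}{}_{bc}
```

The flux-through-a-surface statements are then statements about particular components $R_{\alpha\beta\gamma\delta}$ of this tensor.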

This flow of flux through a surface is a flow of the curvature of space, a flow of spatial bias. In decision processes, there are three distinct types of spatial bias flow, each arising from a distinct type of surface: the flux flow through the mixed spatial frame between active and inactive directions, $R_{\mathbf{ak}}$; the purely inactive-dimension flux flow $R_{\mathbf{jk}}$; and the purely active-dimension flux flow $R_{\mathbf{ab}}$. The constraint for each would be that the flow through the dual space $*\left(\mathbf{U}^{\alpha} \wedge \mathbf{U}^{\beta}\right)$ is zero. This would imply for each case that $R_{\alpha\beta\alpha\beta} = 0$, with the appropriate changes in the subscripts.

We need to further investigate whether we impose all three conditions. We also need to think how these conditions would be applied using the linear recursion method. Nevertheless, the basic conclusion holds: common ground means that this degree of cooperation does not vary on the boundary. In other words, no point on the boundary is better or worse than any other from the perspective of the cooperation or competition.

We need to assess how much constraint we impose on this common ground. If we look at electromagnetic fields on a conducting surface, we find that in fact we have curvature components based on electric fields normal to the surface and magnetic fields in the surface. Based on this observation, we suggest using the idea of a weakly conducting strategic surface in which we impose only conditions in the surface. This allows a variety of effects due to the behavior along the normal to the surface.

This suggests that we impose only the Dirichlet conditions $\delta {A^j}_{\bar a} = 0$ for the payoff fields on the boundary. A similar argument suggests imposing in addition the conditions $\delta \gamma_{jk} = 0$ for the inactive components and $\delta g^{\bar a \bar b} = 0$ for the active components. As a consequence, there will be some acceleration field flow through the surfaces. These are considered part of the initial “forcing functions” that may in fact yield insightful results about the decision process.

In picking a boundary on which there is common ground, we assume that the boundary is far from the action: there are no strategic preferences there. So what does it mean to be far from the action? In physical problems, it has the intuitive meaning that we are far from where there are significant interactions. We accomplish this by going far enough in any direction, leaving the scene of the action behind.

That is not how we currently think of strategies in game theory. We think all values of strategies are possible. In Vol. 2, however, we took a different approach: we go to the co-moving frame. Here the numerical models with “locked” behaviors have the property that the action occurs in a localized region. Not all parts of space are equally populated. If we go far from the populated areas, it makes little sense to talk about strategic advantage. This distinction persists in any frame. The populated areas are associated with regions in which game theory would apply. In Vol. 2 we took the stance that we should set boundary conditions in these populated areas. We now suggest a different stance: we should go far away from such areas, to where there is common ground and no strategic preference exists, and set our boundary conditions there.

Are there resonant circuits in General Relativity?

For some time I have been exploring behaviors in differential geometries, one of which is the geometry of General Relativity. I have also been looking at circuits in electrical engineering and was taken with the idea of resonances. They occur in circuits that have an inductor, a capacitor, and a resistor. Resonances also occur in physical media such as musical instruments, bells, and bridges. In all of these cases, one can “drive” the system with an oscillating force and observe the possibility of resonance by the appearance of a super-strong response to what may seem like a small driving force.

Alternatively, one can look at such systems without a driving force and start it from some initial condition; one then observes the system “ringing” or resonating for some time thereafter.

We approach such systems ordinarily as being described by linear equations and think of the two cases as examples of solutions to such equations that have both homogeneous contributions (the latter “ringing” solutions) and inhomogeneous contributions from the driving force. For the inhomogeneous contributions, a wave solution at a given frequency results in a steady state solution of the same frequency with an amplitude determined by the details of the media. For the homogeneous contributions, there is no initial frequency, but typically an algebraic equation determines the resonant frequency possibilities; there are only a finite number of such solutions.
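For the driven (inhomogeneous) case, the textbook example is a damped oscillator $m\ddot{x} + c\dot{x} + kx = F_0\cos\omega t$, whose steady-state amplitude peaks sharply near the natural frequency when the damping is small. A short sketch with illustrative parameter values:

```python
import math

def steady_state_amplitude(omega, m=1.0, k=1.0, c=0.05, f0=1.0):
    """Amplitude of the steady-state response at drive frequency omega."""
    return f0 / math.sqrt((k - m * omega ** 2) ** 2 + (c * omega) ** 2)

omega0 = math.sqrt(1.0 / 1.0)  # natural frequency sqrt(k/m)

# A small driving force produces a dramatically larger response at resonance
# than well away from it.
assert steady_state_amplitude(omega0) > 100 * steady_state_amplitude(3 * omega0)
```

At resonance the amplitude is limited only by the damping term, which is the linear-system version of the "super strong response" described above.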

You might think the situation is different in the differential geometries that I have been considering. The equations are not linear. There is no longer the presumption that small changes in the initial conditions lead to small changes in the resultant behaviors. For example one might get “chaotic” solutions from the simplest of forms.

However, I think one can still analyze such systems in differential geometry as if they were resonant systems. The basic idea is that resonance effects should manifest whenever the various energy contributions cancel in the sense that they leave a system that looks like there are no external forces. I can put this thought into the equations for a differential geometry theory.

$g_{ab}\frac{dV^{b}}{dt} = f_{ab}V^{b} + T_{a}$

The left-hand side is the acceleration in the theory, which depends in part on the metric field. In general the theories have isometries, symmetries that leave the metric unchanged, and these isometries lead to “electromagnetic fields” or Coriolis forces. The first term on the right represents such a contribution. Finally, there will be a host of other effects, which reflect various contributions that drive the curvature of space-time, such as inertial stresses. We think of these terms as forces that “drive” the behavior. They are the analog of forces in a circuit. If we take the analogy seriously, we speculate that resonance occurs when these forces cancel: $T_{a} = 0$.

What I find interesting is that the equation that remains is a homogeneous equation that has discrete solutions if the assumed flow is harmonic:

$V^{a} = \tilde{V}^{a}\,e^{i\omega t}$

The reason is that the resultant equation is an eigenvalue equation for the frequency:

$f_{ab}\tilde{V}^{b} = i\omega\, g_{ab}\tilde{V}^{b}$

The possible frequencies are based on the eigenvalues of the antisymmetric matrix $f_{ab}$; the eigenvalues are zero or purely imaginary. Thus the solutions lead to a discrete set of real frequencies. I suggest that these are the resonant frequencies of the “circuit”.
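With the metric taken to be the identity for simplicity, this reduces to diagonalizing the antisymmetric matrix itself, and the claim is easy to verify numerically (the matrix entries below are an arbitrary illustration):

```python
import numpy as np

# An antisymmetric "rotational" matrix f_ab; entries are illustrative.
f = np.array([[ 0.0, 2.0, 0.0],
              [-2.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0]])

# With g = identity, f V = i*omega*g V becomes an ordinary eigenvalue
# problem whose eigenvalues are i*omega.
eigenvalues = np.linalg.eigvals(f)
frequencies = sorted(ev.imag for ev in eigenvalues)

# Real antisymmetric matrices have eigenvalues that are zero or purely
# imaginary, so the resonant frequencies omega form a discrete real set.
assert max(abs(ev.real) for ev in eigenvalues) < 1e-9
assert np.allclose(frequencies, [-2.0, 0.0, 2.0])
```

For a non-trivial metric one would instead solve the generalized eigenvalue problem, for example via the eigenvalues of $g^{-1}f$.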

There are a variety of interpretations, depending on the details of the differential geometry. For example, in general relativity, the antisymmetric matrix could be the Coriolis force, reflecting a rotating frame. It could also reflect the electromagnetic field, in which case the rotational field is the magnetic field. For decision process theory, the matrix can represent the payoff field. In each of these cases, one expects that there will be essentially a “free fall” solution that has a discrete set of frequency contributions. In addition, there will be solutions with any frequency, but such solutions will no longer be “free fall”. They will correspond to having a non-zero “driving force”. It is interesting that we find both types of solutions in the numerical exercises for decision process theory.

It is interesting to speculate on whether there are physical manifestations of such discrete solutions in physical processes such as transmission lines. I think the usual analysis does not indicate the possibility. The possibility might occur however if one considers transmission lines in magnetic fields or possibly with additional symmetries, such as circular transmission lines that rotate.

WTC-2014: Visualizing behaviors in differential geometries using Mathematica

Introduction

WTC 2014 slide 1

This talk was given at the Wolfram Technology Conference in October 2014 in Champaign, Illinois. The idea of the talk was to explore how one visualizes behaviors in differential geometries that are more complex than the usual three-dimensional flat geometries we are familiar with.

Abstract

WTC 2014 slide 2

We use Mathematica to visualize such behaviors. Our access to these behaviors is through the partial differential equations that describe the shape of the geometry; these equations provide us with a way to track the shortest paths. Shortest path algorithms are familiar from Newtonian mechanics, geometrical optics, Maxwell’s theory of electromagnetism and Einstein’s theory of relativity. My particular interest has been the possibility of using the shortest path algorithms of differential geometry to solve problems in the social sciences; in particular, to gain insight into the causal behavior of decision processes. In this case, the starting point is the conjecture that decisions are events that form a continuous manifold in both time (causality) and space (strategies). This is a fundamentally different starting point from the usual stochastic approaches used in the social sciences. The resultant behaviors are shortest paths in a differential geometry space.

The challenge is to visualize these behaviors that are consequences of the differential geometry. This talk addresses that challenge. We apply Mathematica to solve the partial differential equations and look at limit cycles and limit surfaces and their relationship to harmonic solutions and chaotic behaviors. We proceed to model solutions, using Mathematica, of more complicated differential geometry possibilities that have applications to gravitational theories and decision theories. We find that the NDSolve and Manipulate functionality can be used to great advantage to visualize the behaviors in these geometries. There are many other applications of these visualizations, such as teaching possibilities in electromagnetism courses.

Outline of talk

  • Decisions as Geometry
    • Take an example www (work-wealth-wisdom) model
    • What is time? What are space components? How is uncertainty dealt with?
    • Idiosyncratic behavior and symmetry
  • Steady state solutions
    • Can steady state be chaotic? Jupiter red spot!
    • Examples from WWW–rotations, Coriolis, magnetic fields and payoffs
    • Need to solve complex stationary problem
    • Use harmonic series approximation
    • Do we exclude chaotic behaviors? See toy model. Build on this with harmonic solutions
  • Decision geometry is more complicated than in physics
    • Two different transformations to stationary frame–two versions of time flow
  • Communication travels at a finite speed
    • There are communication ellipses

Decisions as Geometry

There are many examples of geometry in the physical world, starting with the most familiar, the geometry of the earth. Many theories, such as Newtonian mechanics, use geometry in a variety of contexts that extend the notion of purely flat space to spaces with curvature. The most striking example is perhaps the theory of Einstein, which maintains that our “normal” space is 3+1 curved dimensions. The specific behaviors of geometry that we find interesting are:

  • Position and Uncertainty
  • The nature of time
  • Symmetries

These are particularly interesting to us because of the possibility of using geometry to describe how we make ordinary decisions in the business and economic world. We have presented these ideas at past Wolfram Technology Conferences; many more details can be found on my website.

Position and uncertainty: Let’s deal with position and uncertainty first. At least some of us think of position as described in physical laws as certain, and think of “soft” concepts such as decisions as inherently uncertain. That is perhaps not the full story. It is true that at least starting with Newtonian physics, the concept of position is considered as something that is well-defined and certain. However, the measurement of position is not. Anyone who has measured a room knows that every time you make the measurement you get a slightly different number. Thus the key insight in Newtonian mechanics is to separate the concept of position from measurement: the certain concept from the concept that has a degree of intrinsic uncertainty.

We suggest a similar type of split is necessary for considering decision making, a split that has been made in the Theory of Games. The concept of utility can be applied to preferences as a well-defined concept of certainty. However, the measurement of preferences involves observing the frequency with which we make our choices. For any given decision point, we make one choice out of several, which is uncertain. Nevertheless our preferences, like position, might be considered certain. As with game theory, we take as a point in space the collective strategic preferences of each decision maker.

Time: Let’s deal with time next. In physical processes, we can think of time as Newtonian, meaning we ascribe an absolute time as being applicable to all events happening anywhere throughout the universe. Alternatively, we can think of time as Maxwellian, meaning we ascribe a local time to the events happening in our vicinity, and potentially a different time for events happening elsewhere. It seems to be a less efficient way of talking about time, but it does allow for the fundamental fact in Maxwell’s description of light that light travels at the same speed regardless of frame of reference. Einstein took up this argument and concluded that in this respect Newtonian mechanics needed to be modified to conform with Maxwell’s description.

As a rule, we describe physical processes as causal in nature. Once we agree on a definition of time, we incorporate that description into the way we see the world. Events happen in succession with earlier events having the possibility of influencing later events in a strict cause and effect relationship. Not all events have such a relationship. Statistical events may occur in such a way that later events are independent of all earlier events. A particular type of such an event is stochastic. Wikipedia says: “In probability theory, a purely stochastic system is one whose state is non-deterministic (i.e., “random”) so that the subsequent state of the system is determined probabilistically. Any system or process that must be analyzed using probability theory is stochastic at least in part.[1][2] Stochastic systems and processes play a fundamental role in mathematical models of phenomena in many fields of science, engineering, and economics.”

What stance do we take? Do we assume a causal behavior or a stochastic behavior? The current literature favors the idea that decisions reflect stochastic processes. We take the opposite point of view: We follow the lead from those that have applied Systems Dynamics to a variety of real world problems and conclude that many real world problems can be treated as causal. Having taken this stance, the next question is whether we take a Newtonian view or a Maxwellian view. We have studied decisions starting from the formulations of the Theory of Games and note that the underlying concept of payoffs closely parallels the concept of magnetic fields and so adopt the Maxwellian point of view.

The key conclusion we draw for our study of the geometry of decisions is that it be based on positions determined by strategic preferences, and on time as a local variable subject to the constraint that information moves at a speed that is locally the same for every observer in the system. This is similar to the geometries envisioned by Einstein in his theory of general relativity. It is much broader, however, in that the number of dimensions is not limited, but depends on the number of pure strategies available to the collective of decision makers. The local nature of space and time so defined relies on the existence at each point of a set of potential fields, one for each dimension of space and time. A surface of constant potential is one on which we have a single well-defined value for the amount of distance or time appropriate to that coordinate. We can call these coordinate surfaces. On a map of the earth, these would be the lines of constant latitude or constant longitude; these have a meaning on a flat chart despite the fact that the lines represent a spherical (or oblate spheroidal) earth. In higher dimensions we call these hyper-surfaces, or simply surfaces.

Symmetry and idiosyncratic behavior: Common physical systems often exhibit symmetries that are an essential part of how we think of them. The earth is symmetric about its axis; indeed it is almost spherically symmetric. Such symmetries are reflected in the way we describe the geometry. Let’s pursue as an example the axial symmetry of the earth. The oblate character is reflected in the behavior of the latitudes and the symmetry is reflected in the way we treat the longitude. There is no difference in the shape of the earth at different longitudes. Newtonian physics provides deeper insights: there is a theorem from physics and geometry that to each symmetry there is a conserved motion, which in this case is the angular momentum. It can be shown that the law of sines in trigonometry is a consequence of this conservation law. It is a further attribute of differential geometries that there will also be two associated forces: a centripetal scalar force and a Coriolis vector force. Identifying symmetries provides substantial insight into the geometry. In many cases, the symmetries are not immediately evident, though the attributes are. Locally, we may notice the Coriolis forces before we notice that the earth is not flat.

We also believe that there are important “symmetries” in decision-making, though as with the curvature of the earth, it may be that the attributes or consequences of the symmetries are more noticeable than the symmetries themselves. We find it notable that the game theory approach to decision-making uses the Nash equilibrium, a static scenario in which each player considers his or her version of the game, considers all the worst consequences that can happen, and from those chooses the best of the worst. It is often framed as a max-min choice. It is an equilibrium in the sense that every player makes the same type of choice, and no other choice works better for any player.

The idea we take away from this is that first, it is idiosyncratic: it is based on each player’s personal view of the utilities. Second, the max-min choice has relevant mathematical connections to geometry: the dynamic path or motion in a rotating frame has one “equilibrium” direction parallel to the rotation axis, which can be framed as a max-min solution to motion. Motion in any other direction will be influenced by apparent or Coriolis forces. For this talk, we don’t go into detail other than quoting the result that if we associate with each player a symmetry, then the consequence will be that there will be a game matrix associated with that player that dictates the relative utilities and there will be an equilibrium set by a max-min choice.

WTC 2014 slide 3a

WWW model: with these preliminaries of geometry out of the way we can briefly describe the decision model we will use as an example of complicated geometries. We call it the Work-Wealth-Wisdom model. The various figures or CDF models, such as the one above, have been computed based on the explicit model and numerical values chosen.

One common way in game theory to formulate a decision is to specify a matrix that indicates the payoffs. The rows correspond to the choices of one player and the columns the other player. From a mathematical perspective, this can be reformulated as a symmetric game in which the rows specify the choice of every player plus one additional row called the hedge strategy, and the columns also specify the choice of every player and one additional column called the hedge strategy.

WTC 2014 Worker Payoff and Equilibrium direction

This is the symmetric version of the payoff strategy for the WWW model. The model shown here is totally illustrative and the numerical values could be chosen quite differently. There are two players: the work-player and the wealth-player. Wisdom is how they play the game. The order of the rows (columns) reflects the wealth-player (player 2) strategy to invest wealth into the economy or to collect rent, then the work-player (player 1) strategy to earn or to “take” money, and finally the “hedge” strategy that we identify with time.

The model incorporates the idea that wealth has 10 times more value than work, and the consequence is that there are 10 times more workers than wealthy. The payoff to the worker is 1 if their choice is “work” and the wealth choice is “invest”; the worker gets nothing if the wealth choice is to collect rent. The payoff to the worker is 0 if the worker chooses to “take” and the wealth choice is “invest”, and 1/10 if the wealth choice is to “rent”. In the sense of game theory, no pure choice forms an equilibrium. The best choice for the work-player is a mixed choice favoring “take” by 10:1. The best choice for the wealth-player is a mixed choice favoring “collect rent” by 10:1. From the standpoint of game theory, this is where we stop, except that we have a second game matrix for the wealth-player that is in principle independent. That is the sense in which the choices are idiosyncratic.
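The 10:1 mixtures quoted here follow from the standard closed-form solution for a 2×2 zero-sum game with no saddle point; a quick check, treating the worker payoffs described above as the matrix with rows (work, take) and columns (invest, rent):

```python
import numpy as np

# Worker payoff matrix from the text: work/invest = 1, work/rent = 0,
# take/invest = 0, take/rent = 1/10.
a = np.array([[1.0, 0.0],
              [0.0, 0.1]])

# Closed-form optimal mixed strategies for a 2x2 game without a saddle point.
den = a[0, 0] - a[0, 1] - a[1, 0] + a[1, 1]
p_work = (a[1, 1] - a[1, 0]) / den     # row player's weight on "work"
q_invest = (a[1, 1] - a[0, 1]) / den   # column player's weight on "invest"

assert abs(p_work - 1 / 11) < 1e-12    # i.e. "take" is favored 10:1
assert abs(q_invest - 1 / 11) < 1e-12  # i.e. "rent" is favored 10:1
```

The asymmetry of the 1 and 1/10 entries is what pushes both mixtures to the same 10:1 ratio.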

In a decision geometry, the symmetric payoff matrix is in fact a rotation in the space of strategic decisions consisting of the two choices for the work-player, the two choices for the wealth-player, two hidden dimensions reflecting the symmetry that generates each of the two payoff matrices, and time. The space is seven dimensional, in which only five of the dimensions are “active”, or not hidden. In our model, we assume that the sum over all the active strategies is a symmetry, leaving us with a four-dimensional world in which three dimensions are “space” and one is “time”. The model is thus parallel in some respects to the physical world we live in. This is purely for numerical convenience and helps us visualize what is going on more easily.

Geometry: It is worth noting that the concept of payoffs arises entirely from an inquiry into utility and economic behaviors. However, from a mathematical point of view, we see it as a hint of an underlying geometry. It is possible to use mathematics to relate the payoffs to a concept of distance between points in this strategic space-time. We thus move the ideas above from the concept of coordinate strategic surfaces to the concept of a function of such coordinates that defines a distance function on the space-time manifold. The payoffs are then just one aspect of this distance function. This is the context in which we view decisions as geometry.

Visualization: The resultant geometry is one whose structure changes over time in a well-defined way. Our challenge is to describe that change in a way that makes sense and can be pictured. An example may help: the surface of the ocean is an example of something whose shape changes as a function of time. On the one hand, in the normal coordinates of viewing, we see the shape constantly changing. On the other hand, there are behaviors that can be more simply described if we imagine we are a cork riding on the surface: from our co-moving perspective everything is stationary, though we see structures that when related to the normal coordinates appear as dynamic structures such as waves. We find the same is true in our general geometry. There are many types of solutions, some of which allow a co-moving description that is stationary. Though stationary or steady-state, the motion described is still dynamically interesting: we get standing waves for example. The steady-state motion is what happens after all transient effects die out. For the WWW model, we have positions in the co-moving frame that we call $x$, $y$, and $z$, as well as a proper time $\tau$. These relate back to the normal coordinates $\left\{ u^1, u^2, u^3, t \right\}$. This will be our first type of visualization.

So, for example, with x and y constrained to lie on a circle with fixed radius and arbitrary angle (a cylinder), we can ask what that behavior looks like in normal coordinates. That is shown above. We can also look at the velocity flow, {{u}^{3}} against t, as contours as shown below. For the WWW model, the income inequality generates a corresponding velocity. What are the normal coordinates here? The direction {{u}^{1}} is the relative preference for “work” versus “take”; the direction {{u}^{2}} is the relative preference for “invest” versus “rent collection”; the direction {{u}^{3}} is the relative intensity with which each player plays the game, work minus wealth. So in the graph below, the vertical axis is time and the horizontal axis is {{u}^{3}}. The strong direction along the horizontal axis reflects the increase in workers needed because of the assignment of value to wealth. This was put in as an initial condition; the model then shows the consequence of this assumption over time. Note that co-moving time contours are not constant in normal time. This reflects our treatment of time, which is Maxwellian rather than Newtonian.

WTC 2014 slide 3b

Steady-State Solutions

One way to characterize a geometry is by how one measures distances. For example, we may use flat maps to plot a course over the ocean. On this map, the latitude and longitude may appear as orthogonal straight lines. However, for an accurate course, we need to know how to relate these lines to distance; if we go far enough the distance is not Euclidean, since the earth is approximately an oblate spheroid. Another way to characterize a geometry is to plot the paths that represent the shortest distance between two points. These paths are derived from the distance measure and reflect the curvature of the space. Thus, we may study the complexity of the geometry either by the shape of the surface or by the shortest paths. A complicated geometry is one in which the paths are complex. An example from weather might suggest what we have in mind: a hurricane and a tornado both have cyclonic paths of wind, suggesting chaotic behavior. Ordinarily we associate chaotic behavior with the time dependence of the flow. Yet we can have cyclonic behavior that appears stationary. A good example of such a flow is the “spot” on Jupiter that has persisted for centuries. In this talk we focus on such steady-state behaviors that remain after any transient effects die out. Of course there will also be behaviors associated with the transients, but they are the subject of a future investigation.

WTC 2014 slide 4a

We suggest that the origin of complex behaviors, such as chaotic behaviors, is similar in different domains of study, be they weather phenomena, electromagnetic phenomena or decision phenomena. In each case there are symmetries that give rise to Coriolis forces, magnetic forces or payoff forces respectively. These forces arise from common differential geometry structures.

To study these structures we start from the fact that the cyclonic behaviors are the result of rotations, and such rotations can be investigated by looking at the fields that generate them. The following shows a “phase space” plot of the fields from the WWW model that are used to generate the overall payoff matrix.

WTC 2014 slide 4b

This field is anything but simple. The CDF picture can be rotated, showing the type of complexity one might get. For the experts, we note that this picture is the set of limit surfaces for the z component of the gauge field, along with the partial derivatives of that gauge field along the other two transverse directions x and y. A sphere would be the 3-dimensional analog of a circular phase space plot in 2 dimensions, which is quite different from what is shown here.

Steady-State Solutions can be chaotic

WTC 2014 slide 5

Of course the solutions that we obtain for a geometry may depend on the assumptions used to do the numerical work. So we take a step back and analyze the types of assumptions that we have made. Our tentative conclusion is that we can apply a “harmonic-series” approximation and still identify chaotic solutions. This follows from a technical discussion using Mathematica as a computational tool.

Consider a gravitational source equation with non-linear terms:

{{g}''\left( z \right)+\sin g\left( z \right)=F\left( z \right)}

The source, which is the first figure above (green curve), is periodic and for illustration is a saw-tooth pattern. We scale it by a factor, init, which we vary. The equation is solved using NDSolve from Mathematica with periodic boundary conditions. It uses the method of lines to compute the function, red curve, and its derivative, the blue curve. For a sufficiently small value of init, the phase space plot shown in purple is a closed cycle. However, past a critical value, the non-linear nature of the equation asserts itself and we cease to have a closed cycle. We suggest this is the start of chaotic behavior, though we haven’t used a rigorous definition to establish this suggestion yet. The factor value chosen in this figure is init=1.7.
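The computation in the figure uses Mathematica’s NDSolve. As a rough stand-in (not the notebook code: the saw-tooth period, the start-from-rest initial conditions, and the integration range are our own illustrative choices), the same experiment can be sketched in Python with a classical RK4 integrator:

```python
import math

def sawtooth(z, period=2 * math.pi):
    # Saw-tooth source F(z) with values in (-1, 1); the period is our choice.
    return 2.0 * ((z / period) % 1.0) - 1.0

def phase_trajectory(init, z_max=40.0, dz=0.01):
    """Integrate g''(z) + sin(g(z)) = init * F(z) with classical RK4,
    starting from rest, returning the phase-space points (g, g')."""
    def rhs(z, g, gp):
        # first-order system: g' = gp, gp' = init*F(z) - sin(g)
        return gp, init * sawtooth(z) - math.sin(g)
    g, gp, z = 0.0, 0.0, 0.0
    points = [(g, gp)]
    while z < z_max:
        k1 = rhs(z, g, gp)
        k2 = rhs(z + dz / 2, g + dz / 2 * k1[0], gp + dz / 2 * k1[1])
        k3 = rhs(z + dz / 2, g + dz / 2 * k2[0], gp + dz / 2 * k2[1])
        k4 = rhs(z + dz, g + dz * k3[0], gp + dz * k3[1])
        g += dz / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        gp += dz / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        z += dz
        points.append((g, gp))
    return points

weak = phase_trajectory(0.2)    # small factor: expect a near-closed cycle
strong = phase_trajectory(1.7)  # the value highlighted in the figure
```

Plotting the (g, g′) pairs for the two values of init is the analog of the purple phase-space plot in the figure.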

WTC 2014 slide 5a

Use Manipulate to make chaotic behavior visible

The Manipulate function in Mathematica is a way to explore when the phase space plot ceases to be a limit cycle. We have considered different source terms, and in particular explored approximating the saw-tooth behavior as a Fourier series with a finite number of terms.
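A truncated Fourier series for the saw-tooth is easy to write down. The Python fragment below (the phase convention and period are our own choices) shows the approximation we have in mind; away from the jump the truncation converges, while near the jump the familiar Gibbs overshoot appears:

```python
import math

def sawtooth(z):
    # Saw-tooth with period 2*pi and values in (-1, 1).
    return 2.0 * ((z / (2 * math.pi)) % 1.0) - 1.0

def sawtooth_fourier(z, n_terms):
    # Truncated Fourier series of the same saw-tooth:
    #   saw(z) = -(2/pi) * sum_{k>=1} sin(k z) / k
    return -(2.0 / math.pi) * sum(math.sin(k * z) / k
                                  for k in range(1, n_terms + 1))

# Error at an interior point shrinks as more terms are kept.
err = abs(sawtooth(1.0) - sawtooth_fourier(1.0, 500))
```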

We need not extend the harmonic approximation to the solution. As noted above, we use the method of lines to obtain our full result. For more general geometries, we have also used periodic boundary conditions for all but one space direction. For time, we have considered a single Fourier frequency. For the remaining space direction we use NDSolve and the method of lines.

General geometries, however, may impose differential constraints on initial conditions. On the initial boundary we may have to solve a differential equation in one or more dimensions. The boundary conditions we need to impose may also be periodic. Now, however, the method of lines may not work if there are two or more independent coordinates. So what do we do?

Our solution was to expand the unknown functions as a harmonic series with unknown coefficients. We then substitute the function and its derivatives into the differential equations and use FindRoot to obtain the numerical values of the coefficients. In one dimension we can also use the method of lines as a comparison.
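A minimal sketch of that procedure in Python (not the notebook code: the test equation, the base frequency, the forcing strength and the number of harmonics are all illustrative choices of ours) expands the unknown function in a harmonic series and drives the residual to zero with a Newton iteration, which plays the role FindRoot plays in Mathematica:

```python
import numpy as np

N = 6                    # harmonics kept in the ansatz
M = 2 * N + 1            # unknown coefficients: a0, a_1..a_N, b_1..b_N
OMEGA = 2.0              # base frequency; chosen to keep the linearized
                         # operator g'' + g away from resonance
zs = np.linspace(0.0, 2 * np.pi / OMEGA, M, endpoint=False)  # collocation
init = 0.5               # forcing strength (illustrative)

def forcing(z):
    # truncated saw-tooth source with period 2*pi/OMEGA
    return -init * (2 / np.pi) * sum(np.sin(OMEGA * k * z) / k
                                     for k in range(1, N + 1))

def eval_g(c, z, deriv=0):
    # harmonic-series ansatz, or its second derivative when deriv == 2
    a0, a, b = c[0], c[1:N + 1], c[N + 1:]
    total = a0 if deriv == 0 else 0.0
    for k in range(1, N + 1):
        w = OMEGA * k
        term = a[k - 1] * np.cos(w * z) + b[k - 1] * np.sin(w * z)
        total += term if deriv == 0 else -w * w * term
    return total

def residual(c):
    # g'' + sin(g) - F evaluated at the collocation points
    return np.array([eval_g(c, z, 2) + np.sin(eval_g(c, z)) - forcing(z)
                     for z in zs])

c = np.zeros(M)
for _ in range(25):  # Newton iteration with a finite-difference Jacobian
    r = residual(c)
    J = np.empty((M, M))
    for j in range(M):
        dc = np.zeros(M)
        dc[j] = 1e-7
        J[:, j] = (residual(c + dc) - r) / 1e-7
    c = c - np.linalg.solve(J, r)
```

At the end of the iteration the residual vanishes at the collocation points. Different starting values of c can land on different roots, which is how multiple solutions can appear in this approximation.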

WTC 2014 slide 6

So, returning to the simple gravity source equation above, with a clearly chaotic behavior (purple curve), we also solve the same system using the harmonic approximation (green curve). Not surprisingly, the harmonic approximation is always a limit cycle. We see that the cycle fits into the more general solution. By changing the start point for FindRoot, we have found other limit cycles. So the chaotic behavior manifests itself in the harmonic approximation as multiple solutions.

For the decision geometry WWW model, using the harmonic series approximation, we get the limit surface, that we described earlier:

WTC 2014 slide 4b

That it is a surface and not a hyper-curve is a consequence of it being a harmonic series approximation. However, the specific surface may depend on the “sources”.

Steady-state analysis: We can now describe more fully the steady state analysis that we perform for differential geometries. We illustrate the technique using an example in one time and one space dimension:

-{{\partial }_{t}}^{2}{{x}^{a}}+{{\partial }_{z}}^{2}{{x}^{a}}+A{{\partial }_{t}}{{\partial }_{z}}{{x}^{a}}+B{{x}^{a}}=0

This form of a “wave equation” is typical of the harmonic gauge choice that is always possible to make in a differential geometry. The concept of steady-state is similar to that in electrical and mechanical engineering: As long as the functions A and B are functions only of the space variable z, we can replace the coordinate function with the product of a harmonic in time and an unknown function of space:

{{x}^{a}}={{g}^{a}}{{e}^{i\omega t}}

In other words, in the harmonic gauge the equation is linear in the coordinate. The substitution reduces the partial differential equation to one of lower order, one that is elliptic and depends on the frequency chosen. It has the form:

{{{\omega }^{2}}g+{{\partial }_{z}}^{2}g+i\omega A{{\partial }_{z}}g+Bg=0}
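When A and B are constants (a simplification of ours; in the models they generally vary with z), the reduced equation has plane-wave solutions g=e^{i\kappa z} whose wave numbers satisfy the quadratic dispersion relation \kappa^2 + \omega A\kappa - \omega^2 - B = 0. A short Python check:

```python
import cmath

# Illustrative constants; substituting g = exp(i*kappa*z) into
#   omega^2 g + g'' + i*omega*A*g' + B*g = 0
# gives the dispersion relation  kappa^2 + omega*A*kappa - omega^2 - B = 0.
omega, A, B = 1.3, 0.4, 0.7

disc = cmath.sqrt((omega * A) ** 2 + 4 * (omega ** 2 + B))
kappas = [(-omega * A + disc) / 2, (-omega * A - disc) / 2]

def residual(kappa, z):
    # left-hand side of the steady-state equation for g = exp(i*kappa*z)
    g = cmath.exp(1j * kappa * z)
    gp = 1j * kappa * g
    gpp = -(kappa ** 2) * g
    return omega ** 2 * g + gpp + 1j * omega * A * gp + B * g

residuals = [abs(residual(k, 0.37)) for k in kappas]   # both ~0
```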

When there is more than one spatial dimension, the coefficients A and B depend on the coordinate so these become coupled elliptic partial differential equations. An example is the three spatial dimension model with one time:

WTC 2014 slide 6b

This steady-state solution could be viewed as a solution to a model from general relativity, decision process theory, or in fact any differential geometry model with three space and one time dimension. The boundary and initial conditions, however, will be specific to the problem domain.

We solve these equations using NDSolve, periodic boundary conditions for all but one spatial direction, and use the method of lines for the remaining direction. Constraint differential equations for the coefficients on the initial surface (all spatial directions but that reserved for the method of lines) will be solved using the harmonic series approximation to ensure that the coefficients remain periodic on the boundary.

Based on our investigations to date, we expect that the resultant solutions have the possibility of exhibiting not only “linear” and “near linear” solutions corresponding to simple limit surfaces, but more complex behaviors corresponding to chaotic-type behaviors such as space filling curves. We hope to be able to investigate what we believe is a quite rich structure embedded in these differential geometries. Our remaining examples will be taken from decision process theory, and in particular from the WWW decision model. We will focus on only two aspects out of what is in fact a vast wealth of possibilities: time and communication.

Decision geometry is more complicated than Newtonian physics

We start with a consideration of time and what it means in a differential geometry that consists of a generalized notion of time and space. We have a number of questions:

  • Which way does time flow? Yes, we know it increases and we know it moves in some sense orthogonal to space, but in which direction is that?
    • Do we define the direction of time flow along the normal to the surface of constant time?
    • Do we define the direction of time as being orthogonal to a spatial volume element? The volume, dx\wedge dy\wedge dz, is the wedge-product generalization of the “cross product”.
  • Are these two directions the same or different in a general geometry?
  • Which way do they point in the WWW model?
  • How does the velocity we call now-\beta help us understand and visualize the distinctions?

Time direction: In Newtonian physics, the physics that perhaps comes closest to the way most of us were raised, time clearly moves forward and never backward. Moreover, it is clear that time is orthogonal to all of the directions of space: up and down; forward and backward; left and right. Starting from this perspective, we construct our geometry, which is Euclidean in nature. Time flows and has a value that is the same everywhere in space. We have no real evidence of course that this is true. In fact, modern physics experiments demonstrate that this is not true. It is not that our ideas are unfounded, but that we have attempted to extrapolate our ideas too far beyond what we have observed.

We observe time locally, just as we measure distances in space locally. At our current location, what we mean by a coordinate is a value that is the same in a small region around where we are, at the current time. What we mean by time is a small region around where we are in which a common clock suffices to provide a value for the current time. Putting these ideas into mathematics brings us to differential geometry. In this discipline, we talk about a surface on which a coordinate is constant. Such surfaces exist at each point and extend to a region around that point. The region is not arbitrarily large, however; it depends on the curvature and topology of the space. For a Euclidean space-time, the region does in fact extend everywhere. For curved space this is not the case.

Normal time: From this mathematical perspective, we might then define the direction of time at each point as being along the normal to the coordinate surface associated with time at that point. For the steady-state WWW model, we can say more. The flow in that model is stationary in the co-moving frame. So at each point, we can say that the normal of the time surface is along that flow; the flow is moving at a certain velocity, which can be normalized as the ratio of the co-moving space component of flow divided by the co-moving time component of flow:

{{\beta }_{\upsilon }}=\frac{{{E}^{t}}_{\upsilon }}{\sqrt{1+{{e}^{\alpha }}{{e}_{\alpha }}}{{E}^{t}}_{o}}

This is the now-\beta. The additional square root in the denominator accounts for the energy due to the inactive component of flows that are associated with the idiosyncratic symmetries. The subscript in the numerator reflects the co-moving space components; the subscript in the denominator reflects the co-moving time component of flow.

We denote the co-moving directions as x, y and z; z is the direction in which we use the method of lines, and the other two are transverse directions in which we use periodic boundary conditions. It is thus very natural to consider not x and y, but the radial directions such that

x=r\cos \theta ,y=r\sin \theta

Using this notation, we can fix the radius to a number and look at solutions for all angles. We thus map a cylinder in the co-moving space into a two-dimensional surface in the “velocity” space associated with one concept of time.

WTC 2014 slide 7a

The image above shows this surface for r=0.2 and z=0.05; the CDF file provides an interactive way to view the same surface.

WTC 2014 slide 7b

This figure shows the “velocity” surface at a different point, r=0.2 and z=-0.20. The two surfaces are quite different. If the flow were the same at every point, we would conclude that the geometry and time flow reflect some simple property such as the system is moving uniformly without rotation. This is clearly not the case.

Orthogonal time: Now let’s pursue a different approach, which is to consider that time flows in a direction normal to the space. To give this meaning, note that we are used to considering a small volume of space at each point and associating physical facts with such a volume: the volume itself, the mass contained in that volume, the total utility associated with that volume, etc. Differential geometry also considers volumes, starting from the notion of a surface spanned by two coordinates, say x and y. The normal to the surface for x is a differential change dx; similarly the normal to the surface for y is dy. The surface area spanned by dx and dy, however, is related to the product, called the wedge product, of the two: dx\wedge dy. It is the anti-symmetric product of the two vectors. The volume is the wedge product of any three independent normals: dx\wedge dy\wedge dz. It is the anti-symmetric product of all three.

In differential geometry, we form the dual to any wedge product by a rule that adjoins all the remaining directions to the given wedge product in a totally anti-symmetric way. In a four dimensional space-time, the dual of the volume is a one dimensional vector:

{*\left( dx\wedge dy\wedge dz \right)={{\varepsilon }_{\alpha 123}}d{{x}^{\alpha }}}

In general, hyper-volumes are described by forms. They generalize the cross product to higher dimensions. In particular, the direction orthogonal to the volume also provides a direction for time, “orthogonal time”. We can move along orthogonal time to get to the co-moving frame, but we must use the dual flows, suitably defined with the above dual operator: now-*\beta:

{*{{\beta }^{\upsilon }}=\frac{{{E}_{t}}^{\upsilon }}{\sqrt{1+{{e}^{\alpha }}{{e}_{\alpha }}}{{E}_{t}}^{o}}}

WTC 2014 slide 7c

The orthogonal time gives a velocity surface above for the now-*\beta that is quite different from before, for r=0.2 and z=0.05. It is visually evident from the shape and not just from the numbers, though the numbers are quite different as well.

WTC 2014 slide 7d

As before, the shapes are not the same at different points. Here is the point r=0.2 and z=-0.20. It is instructive to rotate these shapes and explore their detailed characteristics.

We seem to conclude that time is not an absolute, but a quantity that has some substance. It can be viewed in a variety of ways and shows different characteristics from what we are used to from Newtonian physics. The images here from Mathematica help visualize these differences. They arise from the different ways we can view the flow to a co-moving frame.

Communication has a finite speed

Closely associated with time is the speed with which events communicate with each other. We certainly know that this communication is not instantaneous. Nevertheless, we have that idea from Newtonian physics and it is built into our way of thinking. It is hard to contemplate the consequences that communication has a finite speed. Communication is information flow and certainly a great deal has been written on that subject. From a causal point of view, what can we say?

From our study of physics, we learn that Maxwell had it right about communication. Light, in that theory, always travels at the same speed. This is surprising since it means a beam of light originating from a train travels no faster or slower than one on land as the train passes it. This is not in conformance with our notions of relative speed from Newton. It was this idea that was modified by Einstein. It certainly impacts our way of thinking about differential geometries, since some will behave in a Newtonian fashion and others in a Maxwellian fashion. It is our choice to pick the latter.

The consequence is that we view space as Riemannian and that there will be an analog of “light” that will travel at the maximum speed, corresponding to a zero path length:

d{{\tau }^{2}}={{g}_{mn}}d{{u}^{m}}d{{u}^{n}}+2{{g}_{mt}}d{{u}^{m}}dt+{{g}_{tt}}d{{t}^{2}}=0

In this expression the coefficients are summed over the spatial indices m and n. For example in the WWW model, the sums go from 1 to 3 and the direction {{u}^{1}} is the relative preference for “work” versus “take”; the direction {{u}^{2}} is the relative preference for “invest” versus “rent collection”; the direction {{u}^{3}} is the relative intensity with which each player plays the game, work minus wealth.

The idea that communication travels at a finite speed makes sense for any theory about our world, not just physics. But it has consequences when translated into differential geometry. There is a presumed measure of distance or metric between any two points given by the above rule. There is thus a relationship, set by the surface in terms of this measure or metric, between the velocities or flows for maximal communication:

{{{g}_{mn}}\frac{d{{u}^{m}}}{dt}\frac{d{{u}^{n}}}{dt}+2{{g}_{mt}}\frac{d{{u}^{m}}}{dt}+{{g}_{tt}}=0}

What this means is that the maximum communication flow must lie on this ellipsoid. All communications must lie inside. This imposes constraints on the possible values of the metric elements, which must vary continuously from their initial values.
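Whether the quadric is an ellipsoid is a condition on the metric elements: the spatial block must be positive definite and the surface must be real. The helper below is our own illustrative sketch (not the WWW computation), classifying the surface {{g}_{mn}}v^m v^n + 2{{g}_{mt}}v^m + {{g}_{tt}} = 0 by diagonalizing the spatial block:

```python
import numpy as np

def communication_surface(G, b, c):
    """Classify the null surface  v.G.v + 2 b.v + c = 0.

    Returns (is_ellipsoid, center, squared_semi_axes); the squared
    semi-axes are measured along the principal directions of G."""
    G = np.asarray(G, float)
    b = np.asarray(b, float)
    eigvals, _ = np.linalg.eigh(G)
    if np.any(eigvals <= 0):
        return False, None, None           # hyperboloid or degenerate
    center = -np.linalg.solve(G, b)        # complete the square
    level = b @ np.linalg.solve(G, b) - c  # must be positive for a real surface
    if level <= 0:
        return False, center, None
    return True, center, level / eigvals

# Minkowski-like toy metric: spatial block = identity, g_mt = 0, g_tt = -1.
# The communication surface is then the unit sphere ("light" speed 1).
ok, center, axes2 = communication_surface(np.eye(3), np.zeros(3), -1.0)
```

For the toy metric the classification returns the unit sphere; a metric whose spatial block has a negative eigenvalue is flagged as a hyperboloid, the non-physical case discussed above.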

WTC 2014 slide 8a

Here is a sample surface computed from the WWW model at a point. As one moves from point to point, the surface changes shape and can in fact turn into a hyperboloid. In Mathematica we use Manipulate to study this. A hyperboloid allows communications that occur at zero speed or infinite speed, neither of which is reasonable. Since the coefficients of the ellipsoid, the metric elements, are computed from the differential geometry equations, there are constraints on these metric elements.

We can gain additional insight by considering not just the velocity flow of communication, but the dual normalized \beta-flow:

{{\pi }_{m}}=\frac{{{g}_{mn}}\frac{d{{u}^{n}}}{d\tau }+{{g}_{mt}}\frac{dt}{d\tau }}{{{g}_{tn}}\frac{d{{u}^{n}}}{d\tau }+{{g}_{tt}}\frac{dt}{d\tau }}

It is the dual of the communication flow. We argued above that there was a difference between flow velocities related to the space and its dual. The dual was related to flows that we distinguished by whether the indices were upper or lower. Again, we are making that distinction here, to help us remember that this is not the flow but a linear combination of flow components. The requirement that communication be finite can be written in terms of these components and the inverse of the metric:

{{{g}^{mn}}{{\pi }_{m}}{{\pi }_{n}}+2{{g}^{mt}}{{\pi }_{m}}+{{g}^{tt}}=0}

WTC 2014 slide 8b

This shows the result for an initial point for the WWW model. Again, as we move away from this initial point, we require that the shape stay an ellipsoid. A zero or infinite \beta-flow would not be physical. As before, actual communication lies inside this ellipsoid. This imposes constraints on the inverse of the metric formed from the metric matrix elements {{g}_{mn}},{{g}_{mt}},{{g}_{tt}}. Checking that we maintain the communication ellipsoid and its dual is easy and intuitive.
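The two ellipsoids are consistent with each other: lowering the index of any null flow with the metric and normalizing by the time component produces a \pi that lies exactly on the inverse-metric quadric. The Python sketch below verifies this for a toy 4x4 metric of our own choosing (not values from the model):

```python
import numpy as np

# A fixed Lorentzian toy metric with signature (+, +, +, -):
# spatial block g_mn, mixed components g_mt, and g_tt.
g = np.array([[1.2, 0.1, 0.0, 0.2],
              [0.1, 1.0, 0.1, 0.0],
              [0.0, 0.1, 0.9, 0.1],
              [0.2, 0.0, 0.1, -1.0]])
g_inv = np.linalg.inv(g)
G, b, c = g[:3, :3], g[:3, 3], g[3, 3]

# Build a null flow V = (v, 1) along a chosen spatial direction n by
# solving  s^2 (n.G.n) + 2 s (b.n) + c = 0  for the speed s.
n = np.array([1.0, 2.0, -1.0])
n /= np.linalg.norm(n)
A2, B2 = n @ G @ n, b @ n
s = (-B2 + np.sqrt(B2 * B2 - A2 * c)) / A2
V = np.append(s * n, 1.0)

# Lower the index and normalize by the time component: the dual flow pi_m.
w = g @ V
pi = w[:3] / w[3]

# pi lies on the inverse-metric quadric of the text.
check = pi @ g_inv[:3, :3] @ pi + 2 * g_inv[:3, 3] @ pi + g_inv[3, 3]
```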

Summary

We have indicated several concepts that are distinct when we move to more complex geometries:

  • Steady-state solutions can be more complex
    • Limit surfaces that are complex
    • Chaotic behaviors
  • Visualization challenges
    • Dimensionality
    • Gauge Invariance
    • Conservation laws (symmetries)
    • Normal versus orthogonal time
    • Finite communication speed
    • Communication ellipsoids
  • Application to decision geometry
    • WWW model
    • Provides insight into decision-making
  • The techniques apply equally well to general relativity
  • Speculation: can one extend these ideas to engineering?
    • Especially the ideas of harmonics and harmonic series approximation

 

Propaganda, Narrative and Geometry

We often hear the word propaganda in the press and in the media. I wonder what kind of force this represents. The competitive force is always the one that comes first to mind. This is because I think of economics and social behavior. Yet propaganda is the ability to set the narrative, it is the ability to change a person’s mind. How does one include such a force in a detailed theory of behavior? Is the narrative a gravitational field that draws all under its sway to move in the same direction, friend and foe alike? Does this field owe its strength to some type of concentration of mass or energy at some strategic location? Might we call it strategic capital? How do I see that such a narrative effectively describes the process? Is this narrative distinct from other cooperative forces as well as any competitive forces? How would I tell?

To approach an answer to these questions that might be acceptable to most reasonable audiences, I anticipate that I must overcome a general fear such audiences have of numbers being used to describe human behaviors. Perhaps it is more general: in many cases we don’t like using numbers even when describing physical events. Take our conversations about the weather. Are we more comfortable in reducing weather to a yes/no prediction of rain tomorrow than in understanding the complexities of airflow, humidity and pressure as a function of time? The former is like a person deciding, albeit an unknown weather god. The latter involves taking a stand on understanding. I think we are most comfortable with a yes/no prediction.

What is involved in the more complex understanding? Weather phenomena are not the acts of a capricious god but are the result of a process involving interacting parts spread out in both space and time. This process occurs in a continuum over space and time. What is happening here and now is dictated by what has happened elsewhere in the past. This is true at every level of scale, not only from a macroscopic but a microscopic perspective. A process view is based not just on a qualitative and discrete yes/no understanding but a quantitative description. The quantitative description provides the geometry; without this geometry the process is hidden.

But you raise the valid objection that social phenomena are inherently uncertain and intrinsically about decisions, which are discrete yes/no outcomes. How can such events possibly be described as part of a continuum? My choice at this instant does not continuously flow from choices made in the past. It is a gamble and I might in fact, if given the chance, make exactly the opposite choice. Yet even in physics we have phenomena at the quantum level that from a certain point of view are uncertain and, if viewed in a certain way, appear to be discontinuous. That fact has not prevented us from looking at them from another point of view as being described by a differential geometry in both time and space. Indeed, we suggest that game theory has provided us a perspective that social interactions can in fact be viewed geometrically if we focus on the mixed strategies and how they evolve in time, as opposed to the pure strategies that focus on the uncertainties. We don’t focus on the actual decision, but on the mixture of decisions that are possible at any point in time.

As an example, consider that over the last decade, wealth has redistributed itself dramatically (Piketty, 2014). It has not happened discontinuously. It has evolved over time and appears to change continuously across social strata. Some members of the middle class have become wealthy whereas others have become poor. The changes reflect a process, not a capricious set of changes. The process is even more in evidence over long time scales of centuries. These then might be examples that we can focus our attention.

Suppose we analyze the flow of wealth and assume for simplicity it is distributed to two distinct populations. If the populations are valued similarly and if the interaction is zero-sum, we expect each to receive the same payoffs in the sense that what one wins, the other loses. But what if one population believes their payoff, if they win, is much higher than the other population’s? We still achieve a zero-sum game if we balance the product of the population and value for each side. If it is agreed that one side is 10 times more valuable, then this works if the other side has 10 times the population. What matters here is the interplay between competition and propaganda, since there may in fact be no objective reason for the valuation other than a possibly enforced agreement. This then is an example from the data supporting the geometric view, since we can see how the flows change as a function of wealth distribution.
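The bookkeeping in this example is simple enough to spell out (a toy illustration of the 10:1 numbers in the text, nothing more):

```python
# One side is agreed to be 10 times more valuable; the other side has
# 10 times the population. The stakes, population times value, balance.
pop_wealth, value_wealth = 1, 10
pop_work, value_work = 10, 1

stake_wealth = pop_wealth * value_wealth
stake_work = pop_work * value_work
balanced = stake_wealth == stake_work   # zero-sum in the aggregate
```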

So we return to the question of propaganda and narrative, which we view as a question of process. In the above example, the valuation of wealth is felt equally by both populations, yet it may not be factually based. From a theoretical point of view that is really fine. We are not establishing the “truth” of the valuation, but the outcome given that both sides adopt this “truth”. Whether this is a useful exercise is ultimately a question of measurement and quantitative analysis. In a differential geometry theory of decision processes, we look for confirmation in data (behaviors) that highlight the existence of the process. For example, for weather predictions, it is not enough to predict rain versus not rain; rather we must predict in addition behaviors that change continuously with space and time like air flow. Therefore, for social behaviors we must look for flows, as an example, which change continuously with strategic position and time.

We seek to gain understanding from two distinct directions. We look at data to be convinced social behaviors are in fact geometric processes and we look at results of theoretical simulations to be convinced that geometrical processes might explain such data. At some point we hope that these two approaches meet, even though we are not there yet.

Nina Byers

My thesis advisor, Nina Byers, passed away June 5, 2014. I just want to say a few words here about her.

I was a Ph.D. student with Nina. I remember her being easily accessible, especially for those of us who were at the University late at night. Nina was a great teacher and thought deeply about the problems in physics. She stressed the importance of character in becoming a researcher as opposed to following fashion. Intellectual honesty was a value she passed on. She always suggested the importance of following an idea wherever it might lead, and thus didn’t really see physics as a cell one had to stay inside of. I have taken that to heart in my researches as articulated on this site.

Not too long ago, I wrote to her about what I was up to and reflected about how knowledge gets passed down through the generations. It occurs within an extended family. She was schooled at the University of Chicago. Through her help I worked at Argonne National Lab, run by the University of Chicago. My daughter got her law degree from there. Now my grandson goes to Lab School run by the University.

There are other ripples as well, since I, along with another of her students, Paul Stevens, went off to Oxford to join Nina for almost a year. From there I got the bug to work in Europe, and was fortunate to get a post doc at CERN. Nina taught by example that research was an international quest. My daughter has continued that quest, albeit not in research but in law. She carries out her law practice with regular trips to an office in London and elsewhere outside of the US.

I will miss Nina. I will forever be indebted to her for what she gave to me and the positive impact she had on my family (past and present), though they did not know her well and were probably not aware of the huge debt we all owe to her.

 

Inequality in Decision Processes

It seems to me that inequities start from the following type of conversation. There is a misperception among people I talk to that I actually wish to contribute to their capital in some way; that they have some enormously interesting endeavor, which I am dying to become a part of. If I contribute in this way their capital grows. There is no reason to suppose that my capital grows as an outgrowth of this exchange. If one person is wildly successful in getting people to contribute to their capital, they will amass a large fortune at the expense of many contributors, each of whom perhaps loses little, but also gains little.

This phenomenon may be related to the ideas in a new book on capital that speaks about the growing inequality of capital (Piketty, 2014), though capital is now used in its proper sense. From a theoretical viewpoint, I think this is related to decision process theory. In that theory, substances tend to accrete into big piles. When these piles become sufficiently large, they may create stable structures. I have not shown that to be the case, but such behaviors occur in physical theories that are similar. If it did occur, we would have a dynamically created code of conduct.

The idea of capitalism, as expressed by some, is that all boats rise together so there is an advantage to rewarding the entrepreneur. The argument against, by others, is that vast inequalities may arise in which one class does not rise at all. Such class members get submerged, to continue the metaphor. A theory should be able to demonstrate both behaviors and give the necessary and sufficient conditions for each behavior to be stable. In the theoretical view, for the first argument, each individual’s payoffs or desires cease to be important; only the payoffs of the dominant player are important. In this way their desires are not satisfied. At some point they must complain. As individuals this may not be sufficient to change the new status quo. Hence they must unite in order to make an impact.

This outcome need not be the only one, since a modest redistribution of effort might honor the payoffs of most people while cutting back only slightly the capital going to the dominant player. Alternatively, there may be a world in which there is no dominant player, only a dominant strategy, one that everyone individually desires. I think that strategy cannot be the accumulation of wealth (in terms of money), since that would mean most people become poor in terms of money. Of course the theory does not require that outcomes be measured in terms of money, only in terms of things that are valued by the player.

Nevertheless, suppose we buy into the concept that we should act based on our self-interest. This is supported perhaps by the theoretical view that we act based on our internal view of the payoffs. I say perhaps because it is not a given that our internal view of payoffs totally reflects our self-interest. To act purely in our self-interest against others who also act purely in their self-interest suggests a great deal of effort on our part to defend ourselves from attack. If we could agree with our fellow antagonists on some ground rules for such fights, we might be able to reduce the time we spend on defense and increase the time we spend gaining those goals we really strive for. Such an agreement on ground rules changes the nature of the interactions by adding a code of conduct. We need to be regulated. The best regulation is one that is minimal; it is best from the standpoint of where we start, which is as antagonists.

Of course we might start from another position, of being communal, in which case we might have somewhat more regulations. However, I think there would be a need to limit the regulations so as not to totally smother the members of the community.

This line of thought suggests a zero sum or constant sum code of conduct. If we give something of value we should get something in return of comparable value. This “regulation” prevents outright theft and banditry. If nobody feels they are being taken advantage of, they need not spend their time defending their possessions. I suggest this might be the basis for monetary systems. I note that such regulations necessarily reflect conserved quantities in the theory. Perhaps there are reasons why such conserved quantities come into existence and form relatively stable structures. For the moment, let us just say that we have provided plausible reasons why they might exist.

So how do we proceed? I think we have to imagine what the code of conduct would be in a variety of stable configurations. Each configuration might have several components. To start, I suggest making the distinction of fair exchange between two players j, k: this occurs when the difference of their scales r_j - r_k is inactive. A related distinction is when the strategies s_{i_1}, \cdots, s_{i_n} form a coalition: this occurs when their sum is inactive. I think this can be generalized in a nice way to mean that these strategies form a surface. A special case is the constant sum game: the coalition consists of all players and their strategies. Although rather simple, I think these two distinctions revise both chapter 6 and the beginning of chapter 8. There is, I think, a new taxonomy to be considered. It includes the cases currently done but does suggest new possibilities, such as four strategies all part of a single player.
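These two distinctions can be stated operationally: along a solution, a quantity is inactive when it stays constant. A minimal sketch in Python, with made-up time series standing in for computed player scales:

```python
# Made-up time series standing in for computed player scales r_j(t).
r1 = [0.50, 0.55, 0.60, 0.65]
r2 = [0.30, 0.35, 0.40, 0.45]
r3 = [0.20, 0.10, 0.00, -0.10]

def is_inactive(series, tol=1e-9):
    """A quantity is inactive when it does not change along the flow."""
    return max(series) - min(series) <= tol

# Fair exchange between players 1 and 2: the difference r1 - r2 is inactive.
fair_12 = is_inactive([a - b for a, b in zip(r1, r2)])

# Coalition of all three players: the sum r1 + r2 + r3 is inactive.
coalition = is_inactive([a + b + c for a, b, c in zip(r1, r2, r3)])
```

In this constructed example both conditions hold, even though no individual scale is constant by itself.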

In suggesting revisions, I have in mind keeping the distinction of the internal (hidden) nature of the players. Their nature is hidden in the sense of hidden variables and constraints that I don’t wish to explicitly take into account. These hidden variables are private and inactive. They result from processes that are not publicly shared and take no direct part in the strategic decision processes. As with constraints in physical processes, we don’t need the details of these processes, just their effect in the energy considerations.

I also keep the notion that there are active variables, which all players agree are part of the decision making process. It may happen that some of these variables or collections of these variables are part of a code of conduct, which could mean there is a coalition. A coalition acts just like an inactive variable in all respects, except that these inactive variables are public; all players are aware of these degrees of freedom and so they form a necessary part of the solution to any decision problem. To make the distinction clear, when there is a doubt, I now call such variables code-inactive. I include the fair exchange and possibly other distinctions as they become known in this set as well. I will call the hidden inactive variables private-inactive.

Another possibility for a code-inactive variable is time. Time is interesting since we normally don’t think of time as a strategy. Yet I have considered time as the hedge player that allows a symmetrized version of the game. It thus has structural significance. As a stretch distinction, it is the learning strategy which brings us into the future. From this formulation, it is clear that time and space strategies are being treated on the same footing, with the exception of the metrical difference with time. Thus for time to be a code-inactive variable, I suggest that learning is occurring at a steady (conserved) rate. The code of conduct thus may consist of different types of code-inactive variables: the learning strategy; coalitions; and fair exchange.

For every inactive variable, there will be a payoff matrix. This is true whether the variable is private-inactive or code-inactive. For any solution of the active variable problem, there will be a reduction to an equivalent problem that includes code-inactive variables. This problem will have payoffs for the expanded set of players that includes the code of conduct. Once solved, one can go back to the original problem and compute the payoff matrices for each of the private-inactive players. These are the native players that provide the definition of the private-inactive variables. This has been the approach of the numerical computations carried out so far.

What provides new insight is that these different types of codes of conduct, in different combinations, lead to different solutions; we have expanded the taxonomy of possible behaviors. We have to understand in more detail what decision process we have in mind. For example, the notion of a game is centered on the idea of there being payoffs. Whenever we have payoffs we essentially have a game. For any game we should look for the code of conduct which defines the game; it is the strategy that is orthogonal to all the strategies that are active in the exchange.

We can say more by characterizing the types of forces that must be associated with a code of conduct. We take this from the theory. First and foremost is the exchange or payoff force that reflects the competitive push-pull made famous in game theory. It has analogs in physical theories: Coriolis forces in mechanics, magnetic forces in electromagnetism. The second force is less well known in economics, but we believe it is also important: a gradient force whose behavior is set by surfaces of constant potential and whose strength and direction are set by the gradient normal to that surface. When the learning strategy is part of the code of conduct, we expect, and see in our computations, a gradient force that encourages an accumulation of value, of wealth. Other codes of conduct produce forces that discourage such accumulation. In physical theories, gravity and centripetal forces provide respective examples. We suggest looking for such forces for each code of conduct.

You might consider it natural that the overall decision is characterized by a coalition of the whole: it is a constant sum game. From this we expect payoffs or exchanges. Indeed it is also natural to consider that economic behaviors in general are characterized by the exchange of money and value. Moreover, economic behaviors are also considered to involve fair exchange. We see a strong reason for agreeing with (Von Neumann & Morgenstern, 1944) that such behaviors should be considered as constant sum and fair exchange games with a learning strategy. This however does not mean every application of decision process theory should be thought of as such; only a subset of applications with this particular set of codes of conduct.

For example, we can have a Robinson Crusoe environment in which the total scale is not constant. This works not only for a single strategy controlled by one person, but for multiple strategies controlled by a single person. Clearly this can be generalized to multiple players where the overall sum is not a constant. These are not constant sum games. Among such possibilities, we might envision those with a fair exchange between every pair of players, which thus identifies a different code of conduct. This is not the game theory of (Von Neumann & Morgenstern, 1944). Yet it is an interesting model that we can study in detail. A particular example to revisit might be the prisoner’s dilemma.

We are now ready to return to the question of inequalities in economic behaviors. Why might one group get rich and another group get poor? In our formulation of game theory, we consider not only games that are constant sum, but games in which there is fair exchange between each of the players including the learning strategy. Thus each player’s scale is part of the code of conduct. If we relax the idea of fair exchanges, and even perhaps the idea that the learning strategy is inactive, then our results may generate inequalities and thus shed light on how inequalities would occur as a consequence of the theoretical assumptions. We might learn whether such behaviors are stable or not. To date, we have not focused on behaviors associated with inequality in our numerical computations. This is something that should be done.

Wolfram Technology Conference 2013

The following is based on the talk given at the 2013 Wolfram Technology Conference held October 21-23, 2013. The talk was presented as a Wolfram CDF slide show and is reproduced here along with segments that can be executed using the free Wolfram CDF Reader. Since the slides don’t capture everything that was presented, I have also provided additional commentary.

2013 Wolfram Technology Conference Talk p1a

Introduction

This talk continues an inquiry into the relationships between decision making, determinism and chaos: an initial in-depth exposition can be found on this site. Decision making is one of the most human acts and seems to be the most difficult area to formalize into a theory of behaviors that are causal and deterministic. In fact one might think that the very nature of decision making is one of chance and uncertainty. One issue we think relevant is a lack of appreciation of how causal theories deal with uncertainty. In the view of many, there is insufficient understanding of the sensitivity of the dynamic behavior to the initial conditions. When these issues are taken into account, it becomes easier to see the possibility for causal formal theories of human decision making. See a related and much earlier MIT paper on this subject, based on System Dynamics, by Sterman et al.

One thing that might help our understanding of these issues is to explore how we deal with uncertainty in such mundane activities as measurements, which form the basis of physical theories. So, for example, by length we understand an attribute that characterizes the height, width or depth of something. It is a great accomplishment in understanding to separate this concept from the mechanisms by which we perform the measurement. The mechanisms involve a measuring stick and us as agent; for those of you who have measured a basement, you know that multiple measurements yield multiple answers. Yet we are confident that the basement has well defined dimensions. How did we come to this conclusion, and how did we learn to separate out the uncertainties associated with us as agent and the intermediary of a measuring stick, from the invariant concept of length? I think we all agree that the separation has been done and we are comfortable with the idea of length.

This distinction in measurements can also be made when making decisions. There are aspects of decisions that are uncertain which mask potential underlying relationships that are causal. As an example I may choose between several strategies according to a fixed set of frequencies that behave according to a deterministic causal theory, yet there can still be an uncertainty in measuring the exact numerical values of these frequencies. More importantly I may not know what choice I will make at some future time even though the frequency of choices is known. There is no requirement that a causal theory of decisions needs to address that issue. Indeed to construct a causal and deterministic theory we may be better off not including such uncertainties in our theory, just as in physics we are better off not including the measurement process. Such processes are usually treated as stochastic with no dynamic structures.
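The separation between fixed frequencies and unpredictable individual choices can be made concrete with a small simulation (an illustrative sketch, not part of the theory; the frequencies are hypothetical): each individual draw is uncertain, yet the measured frequencies converge to the fixed values.

```python
import random

random.seed(1)
frequencies = {"A": 0.5, "B": 0.3, "C": 0.2}   # fixed, hypothetical frequencies

def choose():
    """One individual choice: unpredictable, though the frequencies are fixed."""
    r = random.random()
    cumulative = 0.0
    for strategy, p in frequencies.items():
        cumulative += p
        if r < cumulative:
            return strategy
    return strategy

n = 100_000
counts = {s: 0 for s in frequencies}
for _ in range(n):
    counts[choose()] += 1

# The measured frequencies lie close to the fixed ones, up to the usual
# statistical uncertainty, while no single call to choose() is predictable.
measured = {s: counts[s] / n for s in frequencies}
```

A causal theory of the frequencies need say nothing about any single draw, just as a theory of length need say nothing about any single reading of the measuring stick.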

We distinguish such measurement uncertainties from the dynamic uncertainties that result from sensitivity to initial conditions. The latter lead to chaos, showing that deterministic causal theories can have rich dynamic structures with understandable regularities, in contrast to stochastic behaviors. To present these important concepts we proceed as follows:

  1. introduction
  2. decision making
  3. determinism and chaos
  4. elasticity
  5. determinism and chaos in decision theory
  6. fixed frame models–complete solutions
  7. conclusions

2013 Wolfram Technology Conference Talk p1b

Decision Making

We note that many decision making behaviors that appear to be uncertain may well have causal elements. For example the behavior of the stock market, which is clearly an example of decision making, often appears to have elements that are uncertain. However, if we observe the time series behavior of the stock market using recurrence plots, we see causal systematics:
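The talk built its recurrence plots in Mathematica from Wolfram|Alpha stock data; the construction itself is simple enough to sketch in a few lines of Python, with a synthetic series standing in for the stock:

```python
import math
import random

random.seed(0)
# Synthetic stand-in for a price series: a slow oscillation plus noise.
series = [math.sin(0.1 * t) + 0.2 * random.gauss(0, 1) for t in range(200)]

def recurrence_matrix(x, eps=0.3):
    """R[i][j] = 1 when the states at times i and j lie within eps of each other."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

R = recurrence_matrix(series)
# Rendering R as a black-and-white image gives the recurrence plot;
# diagonal line structures signal deterministic (causal) regularities,
# while pure noise gives an unstructured speckle.
```

The threshold eps and the synthetic series are choices made here for illustration; the CDF file lets the reader explore real stocks instead.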

2013 Wolfram Technology Conference Talk p1c

Here we have captured the behavior of a particular stock using Wolfram Alpha and created its recurrence plot. The details can be found in the CDF file, which allows the reader to further explore such plots using different stocks.

Determinism and Chaos

The other extreme of uncertainty would be to consider behaviors in the physical world which are assumed to follow well understood laws. A simple example can be taken from physics and the motion of a pendulum. Here non-linear spatial behaviors can also manifest as non-linear time behaviors, including behaviors called chaotic.

2013 Wolfram Technology Conference Talk p2a

For low amplitudes the behavior is periodic, even without making a small angle approximation; there is no structure. However, the restoring force on the pendulum is proportional not to the angle but to its sine, the projection of gravity transverse to the pendulum arm. This introduces a non-linearity. The consequence can be highlighted in a number of ways. One is to provide an oscillating harmonic force on the pendulum. Alternatively, one can start the pendulum with a large initial velocity. Here is a typical result.
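The non-linearity already shows up in the period itself. A minimal sketch (a semi-implicit Euler integration of \ddot{\theta} = -\omega_0^2 \sin\theta; the talk's CDF model is the richer, interactive version):

```python
import math

def quarter_period(theta0, omega0=1.0, dt=1e-4):
    """Release the pendulum at rest from angle theta0 and return the time it
    takes to reach the bottom, integrating theta'' = -omega0**2 * sin(theta)
    with a semi-implicit Euler step."""
    theta, v, t = theta0, 0.0, 0.0
    while theta > 0.0:
        v -= omega0**2 * math.sin(theta) * dt
        theta += v * dt
        t += dt
    return t

T_small = 4 * quarter_period(0.05)   # near the small-angle (linear) limit
T_large = 4 * quarter_period(2.5)    # strongly non-linear regime

# T_small is close to the linear value 2*pi/omega0, while T_large is
# substantially longer: the period itself depends on the amplitude.
```

In the linear (small-angle) approximation the period would be amplitude-independent; the sine makes it grow with amplitude, which is the first footprint of the non-linearity.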

2013 Wolfram Technology Conference Talk p2b

The CDF file allows the reader to experiment with different initial conditions and different values for an external harmonic force. One of the consequences of this deterministic behavior is chaotic behavior. By this we mean that the behavior depends sensitively on the initial conditions. Even though the equations are causal, the resultant behavior appears uncertain; in reality the uncertainty is an artifact of the initial conditions, not of an underlying uncertainty in the physical process. The underlying dynamics impose regularities on such chaotic behaviors that have been extensively studied. There is an interplay between what happens in space and what happens in time. We believe such interactions also occur in decision making.
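The sensitivity itself can be exhibited without any external force by releasing the pendulum near the inverted position, where nearby trajectories separate rapidly (an illustrative sketch; the CDF model adds the harmonic drive):

```python
import math

def trajectory(theta0, dt=1e-3, steps=100_000):
    """Undamped pendulum theta'' = -sin(theta), released from rest at theta0."""
    theta, v = theta0, 0.0
    out = []
    for _ in range(steps):
        v -= math.sin(theta) * dt
        theta += v * dt
        out.append(theta)
    return out

# Two releases near the inverted position, with initial angles differing
# by only one millionth of a radian.
a = trajectory(math.pi - 1.000e-3)
b = trajectory(math.pi - 1.001e-3)

# The tiny initial difference is amplified by orders of magnitude:
# deterministic equations, yet sensitive dependence on initial conditions.
separation = max(abs(x - y) for x, y in zip(a, b))
```

Both runs obey the same causal equation; only the initial condition differs, and that difference alone accounts for the apparent unpredictability.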

Elasticity

We believe that decisions have not only a causal connectivity but a strategic connectivity. We adopt the conceptual game theory view that decisions are characterized by players or agents, each deciding from a list of pure decisions and assigning a frequency of choice to each. From a mathematical perspective each game is decided by these frequencies, which represent a point in the space of strategies spanned by all possible frequency choices. We focus on games which are played multiple times so that we observe both the time evolution and the spatial connectivity. The hypothesis is that decisions change continuously in time and strategy space. We call connectivity in time causal. We call connectivity in strategic space elastic. Spatial elasticity may be related to network connectivity, whose importance is argued by Barabasi.

Game theory, as opposed to our restricted version of conceptual game theory, usually deals with an equilibrium situation, so the ideas of causality and elasticity are outside the scope of the theory. They enter indirectly in the problem statement. We make arguments about the payoffs, which are the key elements of the theory that lead to a specification of the strategic frequencies at equilibrium. What we ordinarily don’t do is assume that the payoffs can change over time. We also don’t consider the possibility that the payoffs and the frequencies are not simply tied by an equilibrium condition.

To explore how this works, we can take any game and see how various decision attributes such as payoffs and frequencies might change over time. We can use the sliders in the CDF model to simulate time. As an example we take the children’s game of rock, paper, scissors.
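The equilibrium point of rock, paper, scissors is easy to verify directly (a standalone check, separate from the CDF model): against an opponent playing each option one third of the time, every pure strategy does equally well, so no change of frequencies can improve matters.

```python
# Row player's payoffs for rock, paper, scissors (+1 win, -1 loss, 0 tie).
A = [[0, -1,  1],    # rock     vs (rock, paper, scissors)
     [1,  0, -1],    # paper
     [-1, 1,  0]]    # scissors

p = [1/3, 1/3, 1/3]  # the symmetric mixed strategy

# Expected payoff of each pure strategy against an opponent playing p:
payoffs = [sum(A[i][j] * p[j] for j in range(3)) for i in range(3)]
# Every pure strategy earns the same (zero) against p, so no unilateral
# change of frequencies improves matters: p is the equilibrium point.
```

If the opponent deviates from p, some entries of `payoffs` become positive, which is exactly the opening exploited by adjusting to what an opponent doesn't like to play.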

2013 Wolfram Technology Conference Talk p3a

In practical terms, a way to do better in the game is to know something about your opponent. If they don’t like playing certain strategies, you are better off adjusting the payoffs to reflect that. As they learn about your behavior, they adjust their behavior, causing you to readjust your behavior. You can experiment with the above model to see the effects. In this picture, we assume that the payoffs are in fact given by their equilibrium values. Shortly we remove that restriction.

What we learn from this is that there are degrees of freedom that can change independent of time. We call it space as an abuse of language: we mean strategic space, not physical space. We anticipate that changes in space will influence changes in time. A simple model illustrating network elasticity is the following:

2013 Wolfram Technology Conference Talk p3b

If you change the spatial variable, the time recurrence pattern changes. It illustrates what we see in the general theory without the corresponding mathematical complexity. The key concept we propose is that decisions are causal and elastic: they depend jointly on time and space variables. Because of the interconnectivity, the resultant behavior is analogous to an elastic medium, which can exhibit waves that propagate, reflect and possibly die out. This goes beyond game theory as well as simple Systems Dynamic models. We now make these ideas more precise.
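Before doing so, the elastic-medium analogy can itself be sketched: a chain of points coupled to nearest neighbors supports disturbances that propagate as waves (the chain and its parameters are our illustrative stand-in, not taken from the theory).

```python
# A chain of points, each coupled to its nearest neighbors: a crude
# stand-in for elastic connectivity in strategy space.
N, dt = 100, 0.1
u = [0.0] * N          # displacements
v = [0.0] * N          # velocities
u[50] = 1.0            # a localized initial disturbance

for _ in range(300):
    a = [0.0] * N
    for i in range(1, N - 1):
        a[i] = u[i - 1] - 2 * u[i] + u[i + 1]   # elastic restoring force
    v = [vi + ai * dt for vi, ai in zip(v, a)]
    u = [ui + vi * dt for ui, vi in zip(u, v)]

# The disturbance has propagated outward in both directions along the
# chain, well away from the site where it started.
```

What happens at one point in "space" is felt elsewhere at later times, which is the joint space-time dependence we have in mind for decisions.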

Determinism and Chaos in Decision Process Theory

Just as in physical models, we have a causal dynamic model in mind. We take these ideas from decision process theory and the behavior along a streamline:

2013 Wolfram Technology Conference Talk p4a

We start with a set of equations:

2013 Wolfram Technology Conference Talk p3c

The first equation sets the rate of change of the frequencies that determine strategic choice. On the left is the rate of change of those frequencies and on the right is the effect on those changes based on the payoffs. When the payoffs are zero, the rate of change on the left is zero. We consider more general changes here. The second and third equations determine the response of the equations to harmonic forces, analogous to those for the pendulum.

Time dependence occurs in the above equations even if the payoffs are independent of time. We see that the frequencies no longer are constants but may oscillate and exhibit other dynamic behaviors just because we are no longer at equilibrium. Shortly we will also consider the possibility that the payoffs also vary in time. But for now assume that they are constant.
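The slides do not reproduce the equations in detail; schematically, the first equation couples the rate of change of the flow to the payoff matrix. Assuming the simplest antisymmetric form dv/dt = F v (our simplification for illustration, not the full theory), the payoff acts like a magnetic or Coriolis force: it turns the flow without changing its magnitude.

```python
import math

# Schematic form assumed here: dv/dt = F v with an antisymmetric "payoff" F.
F = [[0.0, 1.0],
     [-1.0, 0.0]]
v = [1.0, 0.0]
dt, steps = 1e-3, 10_000

for _ in range(steps):
    dv = [sum(F[i][j] * v[j] for j in range(2)) for i in range(2)]
    v = [vi + dvi * dt for vi, dvi in zip(v, dv)]

speed = math.hypot(v[0], v[1])
# The components of v oscillate while the speed stays near 1; with F = 0
# the flow would not change at all.
```

This is the sense in which constant payoffs still produce time dependence: the frequencies circulate about the equilibrium instead of sitting at it.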

We consider a prisoner’s dilemma model in which we can change the various parameters that govern the payoffs. In addition we can change the relative weights of the payoffs for each player, which can introduce non-linear effects independent of the harmonic forces.

2013 Wolfram Technology Conference Talk p4b

When we have small oscillations away from equilibrium, as with the pendulum we see harmonic behaviors for all of the strategies as seen above.

2013 Wolfram Technology Conference Talk p4c

However as we either add more harmonic forces or move away from small oscillations, non-linear behaviors result as shown above. We get chaotic behavior for some choices of the parameters. One can play with the details using the CDF model.

Fixed Frame Models–Complete Solutions

The other source of causal and elastic behavior occurs when the payoffs can also vary. Decision process theory provides for that possibility. There is a model that is soluble numerically; the one we choose is an attack-defense model. With a suitable choice of parameters, we again observe non-linear behaviors:

2013 Wolfram Technology Conference Talk p5a

In these figures, there are four possible strategies in an attack-defense war game in which one side defends two targets (one high value, one low value) and the other side can attack the two targets. The standard game sets payoffs for the four cases. We no longer assume here that the game is played at equilibrium. We assume only that the sum of strategies (actually preferences, in the theory) is conserved, leaving three strategies that can vary. Along a streamline, we view the behaviors of the payoffs as functions of time and three parameters that characterize the strategies along the streamline: x, y and z in the figure. Using sliders we can see how the phase space plot changes. This model is somewhat more complicated and is too large to provide in this talk. So in this case we just provide a couple of illustrative examples.

2013 Wolfram Technology Conference Talk p5b

We see the same type of qualitative behavior as before, without making any assumptions about the form of the harmonic force; it is an outcome of the theory in this case.

Conclusions

In general, chaotic effects require non-linear behaviors. We have observed such behaviors in a decision process theory and expect to see such behaviors in realistic decision processes, including stock market behaviors. These behaviors depend on both causal and elastic effects. The new ingredient is paying attention to the elastic components of decisions: both payoffs and frequencies can vary in time and strategic position.

2013 Wolfram Technology Conference Talk p5c

Value creation, value transfer and bonding in decision making

We tend to approach decision-making as an exercise of an individual without taking into account that in many cases, our decisions are impacted by others. How do we approach that idea in decision process theory (Thomas, 2013)? One thought is to consider that decisions are based on the value or utility we ascribe to options (choices). Where does that value come from? Value for an option may be given or it can be created; value is often the fundamental driving force for a company.

A second thought is that value can be transferred from one person to another. There are several ways that can happen. In the theory of games, this transfer is “active” and a competitive exchange in the sense that what one person gains another person loses. What we have in mind here however is more along the lines of value creation than along the lines of competitive exchange. Cooperation between two or more people can result in the creation of value but still can be thought of as the exchange of value between members of the group. Though similar to competition, it is stationary; it does not change in time and is not dependent on active variables. This distinction is hidden in static theories of games. Value creation or cooperative value exchange results in a tidal bond associated either with the individual (individual value creation) or with pairs of members of the group (exchange of value from one to the other).

This concept of bonding arises naturally in decision process theory and leads to gradient forces associated with player worldviews, codes of conduct and even with time. Gradient forces have quite different characteristics from the competitive exchange forces that in form resemble Coriolis forces, which are velocity dependent along the active dimensions. What unites these three gradient forces is the concept of symmetries associated with inactive space or time directions.

These concepts are essentially geometric in nature and can be illustrated with a familiar example of a rotating sphere such as the earth. The centripetal acceleration is a gradient force on any person or object standing on the earth, which tends to push that person away from the center of the earth. In contrast, the Coriolis force is zero if the person or object is not moving; if they are moving, then the force is along a direction orthogonal to the motion and dependent on the direction and magnitude of the earth’s spin.

A second gradient force is gravity which pulls the person toward the center. We must go into the depths of Einstein’s theory however to see that the gravitational force is itself a force based on a time symmetry. For our purposes here, we note only that in any dynamic geometric theory the two possibilities occur: one for time and one for spatial symmetries. We require somewhat more mathematics to describe mixing between space and time symmetries (Thomas, 2013). Leaving aside these subtleties, we can explore the consequences of the basic ideas.

What might we expect for bonding in decision process theory? For gravity, we have a gravitational potential that is static (independent of time), so we might expect a bonding potential b_{\alpha\beta} for any pair of “worldview” or inactive strategies \{\alpha, \beta\}, which is independent of these strategies. We have cooperative bonding when the strategies are different and value creation b_{\alpha\alpha} associated with \alpha when the strategies are the same. In our fixed frame model (Thomas, 2013, chapter 8) we indeed have such a field with these properties; the gradient determines the tidal acceleration potential \omega_{\upsilon\alpha\beta} = \partial_{\upsilon} b_{\alpha\beta}. We have a geometric picture from the usual rules of calculus; the gradient is along the normal to the surface of constant potential. The tidal bond describes a potential field for each pair of players. Each player or code of conduct defines a symmetry transformation in the sense that none of the geometric constructs depend on the distance along the worldview (or code of conduct) direction. This extends to time as well if it too generates such a symmetry transformation.

What sorts of things can we infer from these observations? The total value creation \Theta in a decision process is the sum over the value creation for each of the players. From the equations in decision process theory, the source for the total value creation is the energy density. Value creation reflects a tidal bond that is stronger the higher the energy density. Thus, we relate bonds (forces) to the exchange of value (energy). The potentials that are independent of the total value creation are the shear bonding potentials \sigma_{\alpha\beta} = b_{\alpha\beta} - \frac{1}{n_i} h_{\alpha\beta} \Theta, where h_{\alpha\beta} in the theory is a diagonal matrix whose elements are all -1. The “trace” of the shear is zero, so in that sense it provides information that is distinct from the total value creation. It is interesting that in decision process theory, the shear bonding potential measures strain and is determined by the stress, generalizing physical theories where stress is proportional to strain.
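The trace-free property can be checked numerically. The conventions below are our assumptions for illustration: h is taken as minus the identity, and both \Theta and the trace are read as contractions with h; the bonding potentials themselves are hypothetical numbers.

```python
# Numeric check that the shear sigma = b - (1/n) h Theta is trace-free.
n = 3
b = [[2.0, 0.5, -1.0],
     [0.5, 1.0,  0.3],
     [-1.0, 0.3, 4.0]]   # hypothetical bonding potentials b[alpha][beta]
h = [[-1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

# Total value creation, read here as the contraction of b with h.
Theta = sum(h[i][i] * b[i][i] for i in range(n))

# Shear bonding potential.
sigma = [[b[i][j] - h[i][j] * Theta / n for j in range(n)]
         for i in range(n)]

# The h-trace of sigma vanishes by construction: the shear carries
# information independent of the total value creation Theta.
trace_sigma = sum(h[i][i] * sigma[i][i] for i in range(n))
```

Whatever values the bonding potentials take, subtracting the h-weighted share of \Theta removes exactly the trace part, leaving the strain-like remainder.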

These thoughts are translations of the theory; it is possible to examine the equations to verify that these translations are reasonable. Since this is only a theory, you might reasonably question whether these ideas apply in the real world. I think the answer is yes. In business parlance for example, value creation is an essential aspect of any decision process. Value creation can be something emotional as creating fear or joy. It may mean suggesting consequences that are unrelated to any payoff: for example, difficulty of execution, of getting paid, etc. It may in fact raise the value of the payoff in the sense of what is at stake, such as a new technological discovery.

Cooperative bonding is working together to create value for the organization. There is no sense of competitive exchange in either value creation or cooperative bonding as used in a normal business relationship. There is a cooperative exchange that is internal to the organization, which results in the transfer of value from one member of the group to another. Thus the concepts here we argue are in fact quite distinct from game theory payoffs, even though they also involve value.

In discussing value creation, value exchange and the new distinction of bonding (tidal bonds), we should also ask how these ideas relate to the notion of engagement. In decision process theory, engagement is a measure of the flow of choices, specifically the flow along one of the inactive directions. Its value fixes how often we do something or prefer something as opposed to how much value is associated with that choice. The potential confusion is that the payoff force may be the product of this coupling and the amount exchanged: this is the case for example in decision process theory.

The distinction between engagement and bonding is helpful. In building value, it may be very useful to not be engaged. In this area, such a person’s actions would appear entitled. The tidal bond could then be very large or very small; it is independent of the engagement. A consequence might be that in a neighboring area, the engagement would necessarily grow. The difficulty in describing these possibilities is the difficulty of keeping track of all of the related changes that can or should occur based on our understanding of how decisions should work. The advantage of a detailed and consistent theory is that it makes such a description prescriptive and subject to well understood computational methods.

Given these distinctions, are there ways in which we act that encourage more engagement and different ways that encourage bonding? Engagement for example is how strongly we pay attention to reality, to what others are doing and thus to the consequences. It seems to me that this does not depend very strongly on how much value has been created or is being exchanged. To increase the value of what is being created or exchanged requires a different skill set. First we have to decide whether we are improving the value that we create or we are increasing some shared value. In the latter case, we are creating a code of conduct; our shared actions work in concert as if they were a single individual. The difference is that shared actions have to be agreed upon.

So to create something of value we first deal with an individual. What immediately comes to mind are individuals, who create something that is brand new: it might be an invention, a work of art or a work of science. It is not uncommon that during the creative act, such individuals are not particularly engaged in the world around them. If we are talking about a code of conduct, then in addition to creating something of value, we require all of the individuals to subscribe to a shared set of actions that support that code: this might be called a set of ethical behaviors or a corporate culture. This requires a new skill set, usually requiring some type of rote education in the sense that the code is not subject to debate.

The point of this education is not to make others be like us, but to ensure their internal payoffs are robust and ethical (the common us) and reflect our shared code of conduct. What are the effective ways to teach? I shall call them “boil and freeze”. First it is necessary, as in boiling, to make all states a possibility. For children this comes about naturally, since one assumes that children provide a blank slate. Next one wants to create a new symmetry (a crystal form, if you like) that reflects the desired code of conduct. I think we are familiar with this process, even though in its extreme form it may cause great harm. For example, a war is a type of boiling, as is a cultural revolution. In businesses, there are less extreme forms, such as reorganizations and forms that follow the dictum “if it ain’t broke, break it!”

To create the desired symmetry or code of conduct there are many well-known approaches. Provide a mentor or role model and require all the students to adopt that mentor’s behaviors. Provide students with many problem-solving situations and allow them to conclude that a particular code of conduct is the obvious solution. For these methods to work, the student must be engaged.

Students may learn through experience only if they are strongly coupled to what happens. Their ability to engage is a mental muscle. Another mental muscle is their ability to cooperate. In this case it is not engagement but bonding: cooperation requires pairs of people to form a bond. So students must create bonds with others: with other students and with their teachers or mentors. Such bonds induce sharing and an exchange of values. There are many examples of these types of activities in the marketplace. To name a few, consider advertising and sales as attempts at creating bonds, and enrollment as a way of identifying where bonds may already exist.

The other aspect of bonding is cooperative exchange. What mental muscles does it require? It is again related to learning, though not necessarily the rote style required to adapt people to a code of conduct. This type of learning is more interactive: it means identifying something of value that someone else has and being critical of what you learn, so that you may in fact create more value and may even cause the teacher to become the student. It is a two-way exchange, as opposed to rote learning.

The takeaway is that realistic decision-making requires the new distinction of bonding as well as engagement. Such distinctions are clearly a part of the normal discussion in the real marketplace and so must be part of any realistic theory. We don’t claim to have the only theory that has incorporated such ideas, but we note that the way in which we have incorporated them is self-consistent and provides the ability to compute the numerical behaviors of these attributes over time.

So we can look at the numerical behavior of two quite different concepts. One is the concept of energy density, or equivalently pressure, which essentially acts like the potential associated with the time symmetry of the fixed frame model.

pressure surface

The picture is particularly nice since it makes the analogy with the earth even more explicit, while also showing that there can be interesting differences. For the particular model assumptions used, the shape is not quite a sphere, and there are some indications of additional structures. The fixed frame model used here describes an attack-defense model (Thomas, 2013, chapter 8) in which each player can choose between two strategies. We choose one code of conduct strategy, which generalizes the game theory notion of a zero-sum game. Though code of conduct makes the choice sound noble, it may not be, as in this example of a war game. This leaves three active strategies: one attack strategy (“first player”), one defense strategy (“second player”) and one strategy that measures the intensity of each player (the difference of the sums of each of their strategies, here plotted in the vertical direction). See the reference for details. It is approximately true, though not exact, that the normal to the surface gives the gradient force associated with the time symmetry (gravity).

total value created surface

The next picture provides the total value creation in the same model. This gives a first view of how the overall tidal bonds reflect the energy density (pressure) distribution. The reference provides detailed information on how the various theoretical parameters are related to these pictures, which summarize the new conceptual distinctions. The pictures show how we envision making these distinctions quantitatively useful. Just as the idea of gravity becomes more useful when we associate a force with it through the gravitational potential, we gain more than qualitative insight by identifying the bonding potentials and their associated forces.

Decision making chaos and determinism

This is an inquiry into decision-making and its connection to uncertainty, based on the white paper with the same title. Decision-making is one of the most human of acts and seems the most difficult to formalize into a theory of behaviors that are causal and deterministic. In fact one might think that the very nature of decision-making is one of chance and uncertainty. One issue we think relevant is the general lack of understanding of causal theories and how they deal with uncertainty. Moreover, in our view, there is insufficient appreciation of how sensitively future behaviors depend on their initial conditions. When these issues are taken into account, it becomes easier to see the possibility of causal formal theories of human decision-making.

Consider the sensitivity of future behaviors to initial conditions, which has been extensively studied under the general category of chaos theory. It has been said that chaos represents for humans the way we perceive the world in its unordered state. If we had perfect information, so the argument goes, we would have perfect determinism. Slight disturbances in what we think we know lead to unknown consequences, even in a theory that is strictly causal and deterministic. So what, then, is deterministic? It seems unreasonable to conclude that because chaotic behavior is possible, we must throw out our causal theories. They work very well and explain a host of data. We believe a more reasonable approach is to be more careful about what we claim to learn from these causal theories.

The theories after all reflect our efforts to identify concepts and attributes that don’t change with time or that change with time in an understandable, causal and continuous way. One thing to explore is how we deal with uncertainty in such mundane activities as measurements, which form the basis of all physical theories.

So, for example, we understand by length an attribute that characterizes the height, width or depth of something. It is a great accomplishment in understanding to separate this concept from the mechanisms by which we perform the measurement. The mechanisms involve a measuring stick and us as an agent; those of you who have measured a basement know that multiple measurements yield multiple answers. Yet we are confident that the basement has well-defined dimensions. How did we come to this conclusion, and how did we learn to separate the uncertainties associated with us as agent and with the intermediary of a measuring stick from the invariant concept of length? Today, we all agree that the separation has been done and we are comfortable with the idea of length.
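
The separation of an invariant length from noisy measurements can be sketched numerically. The following is a minimal illustration (the true length, noise level, and number of measurements are arbitrary choices, not data from this paper): individual readings scatter, yet their mean converges on the underlying value.

```python
import random
import statistics

def measure(true_length, noise, n, seed=0):
    """Simulate n measurements of a fixed length, each with random error."""
    rng = random.Random(seed)
    return [true_length + rng.gauss(0.0, noise) for _ in range(n)]

readings = measure(true_length=7.30, noise=0.05, n=10000)

# Individual readings disagree, but the mean recovers the invariant value.
spread = max(readings) - min(readings)
estimate = statistics.fmean(readings)
print(f"spread of readings: {spread:.3f} m")
print(f"estimated length:   {estimate:.3f} m")
```

The point of the sketch is only that the concept "the basement has a size" survives the fact that no two measurements agree.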

Similarly, we are comfortable with the concept of time, despite our dependence on clocks to make time measurements. From such simple considerations, we have adopted over many centuries physical theories of the behavior of matter that we depend on. For example, we are comfortable with a host of physics problems that relate the distances objects travel to time. We believe we understand how a pendulum works because we can predict its behavior starting from a description in which its restoring force is the source of the acceleration. The behavior is the set of positions of the pendulum over time. We start the pendulum at rest and “drive” it with a harmonic force. We predict from Newton’s theory where the pendulum will be at any future moment. We compare where the pendulum is, by measurement, against where it is predicted to be, and find agreement to a high degree of accuracy.
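
This prediction can be checked in a few lines. The sketch below (parameter values are arbitrary illustrations) integrates a damped, harmonically driven pendulum in the small-angle limit with a Runge-Kutta step, then compares the late-time amplitude with the steady-state amplitude that Newton's theory predicts analytically.

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt/2, [y[i] + dt/2*k1[i] for i in range(2)])
    k3 = f(t + dt/2, [y[i] + dt/2*k2[i] for i in range(2)])
    k4 = f(t + dt, [y[i] + dt*k3[i] for i in range(2)])
    return [y[i] + dt/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

gamma, w0, F, w = 0.2, 1.0, 0.5, 0.8     # damping, natural freq., drive amplitude, drive freq.

def deriv(t, y):
    x, v = y                              # angle and angular velocity
    return [v, -gamma*v - w0**2*x + F*math.cos(w*t)]   # small-angle pendulum

dt, y, t = 0.01, [0.0, 0.0], 0.0          # start the pendulum at rest
peak = 0.0
while t < 120.0:
    y = rk4_step(deriv, t, y, dt)
    t += dt
    if t > 100.0:                         # transients have decayed by now
        peak = max(peak, abs(y[0]))

# Newton's theory: steady-state amplitude of the driven, damped oscillator.
predicted = F / math.sqrt((w0**2 - w**2)**2 + (gamma*w)**2)
print(f"simulated amplitude: {peak:.4f}")
print(f"predicted amplitude: {predicted:.4f}")
```

The simulated and predicted amplitudes agree to better than a percent, which is the "high degree of accuracy" the text refers to.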

This model, because of the non-linear restoring force, generates unexpected structure. As in engineering and business systems, there are distinct ways to gain access to a system’s non-linear characteristics. For the pendulum, one can vary the initial conditions. Alternatively, one can “drive” the behavior by applying an external force, for example one characterized by a single amplitude and a single frequency. As we vary the frequency and amplitude we stress the non-linear structures of the problem. For sufficiently large amplitudes we generate chaotic behaviors: the motion goes from quasi-periodic to one that no longer appears periodic, becoming far more erratic than with smaller driver amplitudes. The idea is that these properties may carry over into the realm of decision-making.
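
A sketch of the strongly driven, fully non-linear case follows. The parameter set used (damping 0.5, drive amplitude 1.5, drive frequency 2/3) is a common textbook choice often reported to give chaotic motion; that characterization is an assumption here, not a result from this paper. Two copies of the pendulum are started a hair apart and the full sin θ restoring force is kept.

```python
import math

def step(y, t, dt, g):
    """RK4 step for theta'' = -0.5*theta' - sin(theta) + g*cos(2*t/3)."""
    def f(t, y):
        th, v = y
        return (v, -0.5*v - math.sin(th) + g*math.cos(2.0*t/3.0))
    k1 = f(t, y)
    k2 = f(t + dt/2, (y[0] + dt/2*k1[0], y[1] + dt/2*k1[1]))
    k3 = f(t + dt/2, (y[0] + dt/2*k2[0], y[1] + dt/2*k2[1]))
    k4 = f(t + dt, (y[0] + dt*k3[0], y[1] + dt*k3[1]))
    return (y[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

g, dt = 1.5, 0.01                        # large drive amplitude stresses the non-linearity
a, b, t = (0.2, 0.0), (0.2 + 1e-8, 0.0), 0.0   # two nearly identical starts
while t < 200.0:
    a, b, t = step(a, t, dt, g), step(b, t, dt, g), t + dt

separation = math.hypot(a[0] - b[0], a[1] - b[1])
print(f"initial separation: 1e-08, final separation: {separation:.3e}")
```

With small drive amplitudes the two trajectories stay together; with large ones, how far they drift apart is the numerical signature of the erratic behavior described above.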

We expect that decision-making has attributes that involve imperfect information as well as perfect information. The challenge is to identify each of these, separating out those attributes that have a predictable behavior from those that are inherently uncertain. We adopt game theory (Von Neumann & Morgenstern, 1944), with its intrinsic view of decisions, as a productive starting point, separating out the pure strategies as things of permanent interest. A pure strategy is the complete set of moves one would carry out in a decision process, taking into account the moves of all of the other players or agents along with any physical or chance effects that might occur. It is a complete accounting of what you would do: a complete plan given every conceivable condition. It is furthermore assumed that you can approximate this complete list with a relatively small list of pure strategies.

Just because there are pure strategies, there is no reason to believe that one of these pure strategies is the right choice to make. If you are in a competitive situation, there may be a downside to your competitor knowing which one you will pick. The solution is to “hide” your choice by picking the pure strategies with specific frequencies. The theory determines these frequencies for you, without informing your opponent which choice you will actually make on any given play.
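
For a two-strategy zero-sum game, the hiding frequencies have a closed form. The sketch below (the payoff numbers are invented for illustration) computes the row player's max-min mixture for a 2 × 2 game with no saddle point: the mixture equalizes the payoff against either column, so the opponent gains nothing by learning the frequencies.

```python
def maxmin_2x2(a, b, c, d):
    """Row player's max-min mixed strategy for payoff matrix [[a, b], [c, d]].

    Assumes no saddle point, so the optimal mixture equalizes payoff
    against both columns:  p*a + (1-p)*c = p*b + (1-p)*d.
    Returns (p, 1-p, game value)."""
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a*d - b*c) / denom
    return p, 1.0 - p, value

# Matching pennies: hide by playing each strategy half the time; value 0.
print(maxmin_2x2(1, -1, -1, 1))        # (0.5, 0.5, 0.0)

# An asymmetric example: unequal frequencies, small positive value.
p, q, v = maxmin_2x2(2, -1, -1, 1)
print(p, q, v)                          # 0.4 0.6 0.2
```

Note that the output is a pair of frequencies and a game value, never a specific play: this is exactly the distinction the surrounding text draws between knowing the frequencies and knowing the choice.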

Thus your decision choice is a specific frequency choice, and in that sense represents the measurement of “length”, despite the fact that in a real decision process, like a real measurement process, there are lots of uncertainties. You would like to determine the frequency choices the other players make, and they want to understand your choices. We emphasize that knowing these frequency choices is not the same thing as knowing what you will actually do on a given play. We take this knowledge in the same way we take the knowledge about the size of our basement. We know how to get a good approximate set of measurements. We know that our basement has a size. For each measurement process we don’t know what size we will actually get.

We extend game theory to decision process theory (Thomas G. H., Geometry, Language and Strategy, 2006) in which the strategy frequencies vary with time. This theory predicts future frequency values from a given set of initial conditions. The theory is causal in this sense, without dictating what a player will actually do at any given moment. The basis of the theory has some similarities to physics and even more underlying similarities to mathematical models of physical processes. Just as in physics, there can be external forces that dictate how the rates of change of frequencies change in time. There will be stationary situations in which these rates of change don’t change, in which the forces generating such changes are zero. We equate that scenario with the whole literature of game theory and its consideration of static games: the frequencies are fixed numbers. Static games provide an important foundation for our approach, though our results diverge once dynamic effects are included.
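
Decision process theory's field equations are beyond a short sketch, but the flavor of time-varying frequencies can be shown with a standard stand-in: replicator dynamics, a generic model from evolutionary game theory, not the author's equations. For matching pennies, each player's strategy frequency evolves deterministically from its initial condition, while the actual play on any round remains a random draw from those frequencies.

```python
# Replicator dynamics for matching pennies -- a generic stand-in used
# here only to illustrate frequencies that evolve causally in time.
# x = row player's frequency of strategy 1, y = column player's.

def deriv(state):
    x, y = state
    return (x*(1 - x)*(4*y - 2),     # row shifts toward strategy 1 when y > 1/2
            y*(1 - y)*(2 - 4*x))     # column reacts in the opposite sense

def rk4(state, dt):
    k1 = deriv(state)
    k2 = deriv((state[0] + dt/2*k1[0], state[1] + dt/2*k1[1]))
    k3 = deriv((state[0] + dt/2*k2[0], state[1] + dt/2*k2[1]))
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    return (state[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

state, dt = (0.7, 0.6), 0.01          # initial frequency choices
for _ in range(2000):                  # evolve to t = 20
    state = rk4(state, dt)

x, y = state
print(f"frequencies after t=20: x={x:.3f}, y={y:.3f}")
```

The frequencies cycle around the static-game solution (1/2, 1/2) and remain valid probabilities throughout, illustrating a dynamic theory whose stationary limit is the static game.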

A second scenario is one in which the fields that generate the forces are static, but the flows, the rates of change of the frequencies, are dynamic. The flows may depend on what other players are doing, and so we can distinguish a special subset of flows that are stationary: at a specific “location”, the flow doesn’t change. However, if you follow the streamline of the flow you will follow a path that changes in time. You might think of a weather pattern that is stationary, in which the wind at any position is constant in speed and direction. If you follow a path along the wind by adding smoke, you will see that the smoke follows a streamline that moves with time. These considerations are really rather similar to the pendulum problem.
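
The weather analogy can be made concrete. In the sketch below (the field is an invented example), the velocity field v(x, y) = (−y, x) is stationary, it never changes, yet a smoke particle released into it moves along a circular streamline over time; tracing the particle numerically confirms it stays on the circle.

```python
import math

def velocity(x, y):
    """A stationary flow field: constant in time at every location."""
    return (-y, x)

def advance(p, dt):
    """RK4 step moving a smoke particle with the local flow."""
    k1 = velocity(*p)
    k2 = velocity(p[0] + dt/2*k1[0], p[1] + dt/2*k1[1])
    k3 = velocity(p[0] + dt/2*k2[0], p[1] + dt/2*k2[1])
    k4 = velocity(p[0] + dt*k3[0], p[1] + dt*k3[1])
    return (p[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            p[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

p, dt, steps = (1.0, 0.0), 0.01, 628   # one full circuit, t ~ 2*pi
for _ in range(steps):
    p = advance(p, dt)

radius = math.hypot(*p)
print(f"position after one circuit: ({p[0]:.4f}, {p[1]:.4f}), radius {radius:.6f}")
```

The field is fixed; only the particle moves. This is the distinction between a stationary flow and the time-dependent streamline a player follows through it.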

Based on these considerations, and verifying our ideas with a variety of numerical examples, we argue that behaviors from decision process theory are deterministic, yet represent the uncertainty of choices based on frequency. We can separate out from the decision process the uncertain aspect of the decision, whose future behavior is unknown: we don’t know which pure strategy will be chosen. Thus we identify the aspect of the decision process that deprives us of perfect information. We also identify those aspects of decision processes that evolve continuously in time and can be determined in a causal manner. These are the numerical frequencies that form the basis of the choice, but don’t actually determine the specific choice at any given time. Our theory is then about the frequencies and not about the choices.

This is not the end of the story. Just because we have a theory that determines future behavior from knowledge of past behavior, we are not justified in assuming that the predictions will be insensitive to our starting point. Non-chaotic behavior assumes that the future is not sensitive to small changes in the starting point; this often follows from theories that are linear in nature. Chaotic behavior, by contrast, allows small changes to generate large behavioral differences, even if the behavior ultimately stays bounded. Over time, we expect to see significant deviations. Some of the non-linear behavior is a consequence of the fact that preferences can’t grow without limit. We postulate that concept here, but in fact we do see evidence for that behavior in the full theory.
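
Sensitivity to the starting point, combined with behavior that stays bounded, can be illustrated with the simplest standard example, the logistic map (a generic illustration, not a model from this paper): two starting points differing by 10⁻¹⁰ track each other at first, then diverge completely, while both remain confined to [0, 1].

```python
def logistic(x, r=4.0):
    """A bounded but chaotic map: x stays in [0, 1] forever."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10        # almost identical initial conditions
separations = []
for n in range(100):
    x, y = logistic(x), logistic(y)
    separations.append(abs(x - y))

print(f"separation after 5 steps:  {separations[4]:.2e}")
print(f"largest later separation:  {max(separations[30:]):.2f}")
```

Early on the separation is still microscopic; later it is of order one, even though neither trajectory ever leaves the unit interval. This is exactly the pattern described above: large behavioral differences within bounded behavior.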

We expect that chaotic behaviors can be generated from within, without recourse to external “drivers”, if there are suitable parameters that can be varied. For the pendulum, the suitable variable would be the initial speed; for decision processes, the initial flows and payoffs are the suitable variables. The chaotic behavior is a result of the non-linear nature of the forces and can also be made visible with a “driver” representing external periodic forces. It is then a matter of whether the amplitudes and frequencies excite the underlying structures. In either case, seemingly benign behavior such as a steady state need not indicate the lack of interesting structures. The key is how to excite these structures into existence.