# Tyrant Model using Decision Process Theory

A classic game-theory model for a tyrant might be a game of chicken, with the tyrant against the poor, each choosing between swerving and crashing. The difference, however, is that a tyrant has more power and influence than the poor. One aspect of game theory that we believe needs to be changed is that the size of the payoffs should matter.

As an example of something that plays no role in game theory, consider the concept of engagement. Multiplying the payoff matrix by the engagement does not change the max-min solution of game theory and is thus of no importance. Moreover, the payoff matrix strength is not relevant either. In physics, this is like saying that the charge of a particle does not influence its motion in a magnetic field, nor does the strength of the magnetic field. This is only true for a charge moving parallel to the field. The circulation around the field clearly depends on the charge as well as on the strength of the field.
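The point about engagement can be illustrated with a small sketch: multiplying the payoff matrix by any positive engagement factor rescales every entry but leaves the max-min (security) choice unchanged. The chicken-style payoffs and the `maxmin_row` helper below are illustrative, not taken from the model:

```python
def maxmin_row(matrix):
    """Index of the row whose worst-case (minimum) payoff is largest."""
    return max(range(len(matrix)), key=lambda r: min(matrix[r]))

# Illustrative payoffs. Rows: tyrant swerves / crashes;
# columns: the poor swerve / crash.
payoff = [[0, -1],
          [1, -10]]

# Scaling by a positive "engagement" factor rescales every entry...
engagement = 5.0
scaled = [[engagement * entry for entry in row] for row in payoff]

# ...but the security (max-min) choice is unchanged.
print(maxmin_row(payoff), maxmin_row(scaled))  # same row index
```

This is the sense in which engagement "plays no role" in classical game theory: it drops out of the solution entirely.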

The hard part is to disentangle the charge from the field strength. This is accomplished in physics by the field equations, which say that charges and currents are the sources of the fields. Strong engagement leads to changes in the players’ behaviors through changes in their payoff and valuation fields. What is remarkable, however, is that engagement, like viscosity, enters the decision process theory equations in a non-linear way. Therefore, dramatically increasing or decreasing the engagement changes the qualitative behavior of the decision flow. You get an inkling of why when you realize that the engagement is itself a conserved flow and therefore contributes to the energy momentum of the system. It thus gets shared with the other component parts.

Like viscosity, when the engagement is small it has almost no effect other than adjusting the time scale. As it gets larger, the non-linear nature of its behavior becomes evident. It is even possible that we might get “chaotic” or “turbulent” behaviors, though there is currently no basis for this conjecture. The following is the result of a computation (using a decision engineering notebook, described in the Stationary Ownership Model Update white paper and implemented in Mathematica) in which the poor outnumber the tyrant by 10:1 and the engagement of the tyrant to the poor is 1:10.

We have the following interpretation of this picture. From a game theory perspective, the model has two Nash equilibria. The poor see their strategy to be “swerve” and assume that the wealthy will “crash”. The wealthy see the opposite. This is one variant of the classic game of chicken. It is clear in the figure that by assuming the wealthy are much less engaged, we are closer to one of the Nash equilibria: the one in which the poor “swerve” and the wealthy “crash”.

We see more, however. Because of the inequality between the two players, the preference of the poor to crash is going to be smaller over time than that of the wealthy. So, in some sense, crashing is not as devastating for the wealthy as it is for the poor. There is much less incentive for the wealthy to avoid the choice “crash”.

The strategic flows add some interesting aspects to the story. The magnitude of the flows in this model run are all comparable. Nevertheless, they support the fact that the net behavior for the wealthy to swerve is zero and the net behavior of the wealthy to crash is small and positive. It is also clear that the net flow for the poor to swerve is much bigger than any of the other net flows.

We interpret this as follows. The poor will suffer and avoid the game of chicken. On average they will swerve. The wealthy will crash, making what appears to be the correct assumption that the poor will always swerve. These assumptions, however, rely on the small engagement of the wealthy. This appears to be the trademark of the tyrant: they suffer no consequence for their action and are thus rewarded. The full decision process theory would go on to predict that the payoffs for the wealthy should not change, since their source is small. The poor, however, might change their payoffs over time to better reflect what is happening.

Is there anything that can be done? One possibility is to demand an equality of effort between the tyrant and the people, through laws, the courts or the press, as examples. An example computation is provided below, using a gravitation-like field to force equality by imagining a potential well that is centered where the strategic effort of the poor matches the strategic effort of the tyrant.

The model generates a time component of the metric, which can be thought of as a “gravitational” field that pulls all strategies to this common center. We see the consequence of this in the above figure. We now expect that the tyrant will see consequences to their actions, despite their initial lack of engagement. In fact, it might be that the engagement itself will also change as a consequence.

# WTC-2014: Visualizing behaviors in differential geometries using Mathematica

Introduction

This talk was given at the Wolfram Technology Conference in October 2014 in Champaign, Illinois. The idea of the talk was to explore how one visualizes behaviors in differential geometries that are more complex than the usual three-dimensional flat geometries we are familiar with.

Abstract

We use Mathematica to visualize such behaviors. Our access to these behaviors is through the partial differential equations that describe the shape of the geometry; these equations provide us a way to track the shortest paths. Shortest-path algorithms are familiar from Newtonian mechanics, geometrical optics, Maxwell’s theory of electromagnetism and Einstein’s theory of relativity. My particular interest has been the possibility of using the shortest-path algorithms of differential geometry to solve problems in the social sciences; in particular, to gain insight into the causal behavior of decision processes. In this case, the starting point is the conjecture that decisions are events that form a continuous manifold in both time (causality) and space (strategies). This is a fundamentally different starting point from the usual stochastic approaches used in the social sciences. The resultant behaviors are shortest paths in a differential geometry space.

The challenge is to visualize these behaviors that are consequences of the differential geometry. This talk addresses that challenge. We apply Mathematica to solve the partial differential equations and look at limit cycles and limit surfaces and their relationship to harmonic solutions and chaotic behaviors. We proceed with model solutions, using Mathematica, of more complicated differential geometry possibilities that have applications to gravitational theories and decision theories. We find that the NDSolve and Manipulate functionality can be used to great advantage in order to visualize the behaviors in these geometries. There are many other applications of these visualizations, such as teaching possibilities in electromagnetism courses.

Outline of talk

• Decisions as Geometry
• Take an example: the WWW (work-wealth-wisdom) model
• What is time? What are space components? How is uncertainty dealt with?
• Idiosyncratic behavior and symmetry
• Can steady state be chaotic? Jupiter red spot!
• Examples from WWW–rotations, Coriolis, magnetic fields and payoffs
• Need to solve complex stationary problem
• Use harmonic series approximation
• Do we exclude chaotic behaviors? See the toy model; build on this with harmonic solutions
• Decision geometry is more complicated than in physics
• Two different transformations to stationary frame–two versions of time flow
• Communication travels at a finite speed
• There are communication ellipses

Decisions as Geometry

There are many examples of geometry in the physical world, starting with the most familiar, the geometry of the earth. Many theories, such as Newtonian mechanics, use geometry in a variety of contexts that extend the notion of purely flat space to spaces with curvature. The most striking example is perhaps the theory of Einstein, which maintains that our “normal” space is a curved space of 3+1 dimensions. The specific behaviors of geometry that we find interesting are:

• Position and Uncertainty
• The nature of time
• Symmetries

These are particularly interesting to us because of the possibility of using geometry to describe how we make ordinary decisions in the business and economic world. We have presented these ideas at past Wolfram Technology Conferences; many more details can be found on my website.

Position and uncertainty: Let’s deal with position and uncertainty first. At least some of us think of position as described in physical laws as certain, and think of “soft” concepts such as decisions as inherently uncertain. That is perhaps not the full story. It is true that, at least starting with Newtonian physics, the concept of position is considered as something that is well-defined and certain. However, the measurement of position is not. Anyone who has measured a room knows that every time you make the measurement you get a slightly different number. Thus the key insight in Newtonian mechanics is to separate the concept of position from measurement: the certain concept from the concept that has a degree of intrinsic uncertainty.

We suggest a similar type of split is necessary for considering decision making, a split that has been made in the Theory of Games. The concept of utility can be applied to preferences as a well-defined concept of certainty. However, the measurement of preferences involves observing the frequency with which we make our choices. For any given decision point, we make one choice out of several, which is uncertain. Nevertheless our preferences, like position, might be considered as certain. As with game theory, we take as a point in space the collective strategic preferences of each decision maker.

Time: Let’s deal with time next. In physical processes, we can think of time as Newtonian, meaning we ascribe an absolute time as being applicable to all events happening anywhere throughout the universe. Alternatively, we can think of time as Maxwellian, meaning we ascribe a local time to the events happening in our vicinity, and potentially a different time for events happening elsewhere. It seems to be a less efficient way of talking about time, but does allow for the fundamental fact in Maxwell’s description of light that light travels at the same speed regardless of frame of reference. Einstein took up this argument and concluded that in this respect Newtonian mechanics needed to be modified to conform with Maxwell’s description.

As a rule, we describe physical processes as causal in nature. Once we agree on a definition of time, we incorporate that description into the way we see the world. Events happen in succession with earlier events having the possibility of influencing later events in a strict cause and effect relationship. Not all events have such a relationship. Statistical events may occur in such a way that later events are independent of all earlier events. A particular type of such an event is stochastic. Wikipedia says: “In probability theory, a purely stochastic system is one whose state is non-deterministic (i.e., “random”) so that the subsequent state of the system is determined probabilistically. Any system or process that must be analyzed using probability theory is stochastic at least in part.[1][2] Stochastic systems and processes play a fundamental role in mathematical models of phenomena in many fields of science, engineering, and economics.”

What stance do we take? Do we assume a causal behavior or a stochastic behavior? The current literature favors the idea that decisions reflect stochastic processes. We take the opposite point of view: We follow the lead from those that have applied Systems Dynamics to a variety of real world problems and conclude that many real world problems can be treated as causal. Having taken this stance, the next question is whether we take a Newtonian view or a Maxwellian view. We have studied decisions starting from the formulations of the Theory of Games and note that the underlying concept of payoffs closely parallels the concept of magnetic fields and so adopt the Maxwellian point of view.

The key conclusion we draw for our study of the geometry of decisions is that it be based on positions as determined by strategic preferences, and on time as a local variable subject to the constraint that information move at a speed that is locally the same for every observer in the system. This is similar to the geometries envisioned by Einstein in his theory of general relativity. It is much broader, however, in that the number of dimensions is not limited, but depends on the number of pure strategies available to the collective of decision makers. The local nature of space and time so defined relies on the existence at each point of a set of potential fields, one for each dimension of space and time. A surface of constant potential is one on which we have a single well-defined value for the amount of distance or time appropriate to that coordinate. We can call these coordinate surfaces. On a map of the earth, these would be the lines of constant latitude or constant longitude; these have a meaning on a flat chart despite the fact that the lines represent a spherical (or oblate spheroidal) earth. In higher dimensions we call these lines hyper-surfaces, or simply surfaces.

Symmetry and idiosyncratic behavior: Common physical systems often exhibit symmetries that are an essential part of how we think of them. The earth is symmetric about its axis; indeed it is almost spherically symmetric. Such symmetries are reflected in the way we describe the geometry. Let’s pursue as an example the axial symmetry of the earth. The oblate character is reflected in the behavior of the latitudes and the symmetry is reflected in the way we treat the longitude. There is no difference in the shape of the earth at different longitudes. Newtonian physics provides deeper insights: there is a theorem from physics and geometry that to each symmetry there is a conserved motion, which in this case is the angular momentum. It can be shown that the law of sines in trigonometry is a consequence of this conservation law. It is a further attribute of differential geometries that there will also be two associated forces: a centripetal scalar force and a Coriolis vector force. Identifying symmetries provides substantial insight into the geometry. In many cases, the symmetries are not immediately evident, though the attributes are. Locally, we may notice the Coriolis forces before we notice that the earth is not flat.

We also believe that there are important “symmetries” in decision-making, though as with the curvature of the earth, it may be that the attributes or consequences of the symmetries are more noticeable than the symmetries themselves. We find it notable that the game theory approach to decision-making uses the Nash equilibrium, a static scenario in which each player considers his or her version of the game, considers all the worst consequences that can happen, and from those chooses the best of the worst. It is often framed as a max-min choice. It is an equilibrium in the sense that every player makes the same type of choice, and no other choice works better for any player.

The idea we take away from this is that first, it is idiosyncratic: it is based on each player’s personal view of the utilities. Second, the max-min choice has relevant mathematical connections to geometry: the dynamic path or motion in a rotating frame has one “equilibrium” direction parallel to the rotation axis, which can be framed as a max-min solution to motion. Motion in any other direction will be influenced by apparent or Coriolis forces. For this talk, we don’t go into detail other than quoting the result that if we associate with each player a symmetry, then the consequence will be that there will be a game matrix associated with that player that dictates the relative utilities and there will be an equilibrium set by a max-min choice.

WWW model: with these preliminaries of geometry out of the way we can briefly describe the decision model we will use as an example of complicated geometries. We call it the Work-Wealth-Wisdom model. The various figures or CDF models, such as the one above, have been computed based on the explicit model and numerical values chosen.

One common way in game theory to formulate a decision is to specify a matrix that indicates the payoffs. The rows correspond to the choices of one player and the columns the other player. From a mathematical perspective, this can be reformulated as a symmetric game in which the rows specify the choice of every player plus one additional row called the hedge strategy, and the columns also specify the choice of every player and one additional column called the hedge strategy.

This is the symmetric version of the payoff strategy for the WWW model. The model shown here is purely illustrative and the numerical values could be chosen quite differently. There are two players: the work-player and the wealth-player. Wisdom is how they play the game. The order of the rows (columns) reflects the wealth-player (player 2) strategy to invest wealth into the economy or to collect rent, then the work-player (player 1) strategy to earn or to “take” money, then finally the “hedge” strategy that we identify with time.

The model incorporates the idea that wealth has 10 times more value than work, and the consequence is that there are 10 times more workers than wealthy. The payoff to the worker is 1 if their choice is “work” and the wealth choice is “invest”. The worker gets nothing if the wealth choice is to collect rent. The payoff to the worker is 0 if the worker chooses to “take” and the wealth choice is “invest”; the payoff is 1/10 if the worker chooses to “take” and the wealth choice is “rent”. In the sense of game theory, no pure choice forms an equilibrium. The best choice for the work-player is to mix the two choices, with a 10:1 mix favoring “take”. The best choice for the wealth-player is a 10:1 mix favoring “collect rent”. From the standpoint of game theory, this is where we stop, except that we have a second game matrix for the wealth-player that is in principle independent. That is the sense in which the choices are idiosyncratic.
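The 10:1 mixed strategies quoted above can be checked with the indifference conditions of game theory. The sketch below treats the worker’s payoff matrix as a zero-sum game, an illustrative simplification of the model:

```python
from fractions import Fraction

# Worker payoffs as described in the text
# (rows: work / take; columns: invest / rent).
a = Fraction(1)      # work vs invest
b = Fraction(0)      # work vs rent
c = Fraction(0)      # take vs invest
d = Fraction(1, 10)  # take vs rent

# Wealth mixes probability q on "invest" so the worker is indifferent:
#   q*a + (1-q)*b == q*c + (1-q)*d
q = (d - b) / (a - b - c + d)

# Worker mixes probability p on "work" so wealth is indifferent:
#   p*a + (1-p)*c == p*b + (1-p)*d
p = (d - c) / (a - b - c + d)

print(p, q)  # both 1/11: a 10:1 mix favoring "take" and "collect rent"
```

Exact rational arithmetic makes the 10:1 ratio explicit: each player puts weight 1/11 on the “generous” strategy and 10/11 on the other.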

In a decision geometry, the symmetric payoff matrix is in fact a rotation in the space of strategic decisions consisting of the two choices for the work-player, two choices for the wealth-player, two hidden dimensions reflecting the symmetry which generates each of the two payoff matrices and time. The space is seven dimensional, in which only five of the dimensions are “active” or not hidden. In our model, we assume that the sum over all the active strategies is a symmetry, leaving us with a four dimensional world in which three are “space” and one dimension is “time”. The model is thus parallel in some respects to the physical world we live in. This is purely for numerical convenience and helps us visualize what is going on more easily.

Geometry: It is worth noting that the concept of payoffs arises entirely from an inquiry into utility and economic behaviors. However from a mathematical point of view, we see it as a hint of an underlying geometry. It is possible to use mathematics to relate the payoffs to a concept of distance between points in this strategic space-time. We thus move the ideas above from a concept of coordinate strategic surfaces to the concept of a function of such coordinates that define a distance function on the space-time manifold. The payoffs are then just one aspect of this distance function. This is the context in which we view decisions as geometry.

Visualization: The resultant geometry is one whose structure changes over time in a well-defined way. Our challenge is to describe that change in a way that makes sense and can be pictured. An example may help: the surface of the ocean is an example of something whose shape changes as a function of time. On the one hand, in the normal coordinates of viewing, we see the shape constantly changing. On the other hand, there are behaviors that can be more simply described if we imagine we are a cork riding on the surface: from our co-moving perspective everything is stationary, though we see structures that when related to the normal coordinates appear as dynamic structures such as waves. We find the same is true in our general geometry. There are many types of solutions, some of which allow a co-moving description that is stationary. Though stationary or steady-state, the motion described is still dynamically interesting: we get standing waves for example. The steady-state motion is what happens after all transient effects die out. For the WWW model, we have positions in the co-moving frame that we call x, y and z as well as a proper time ${\tau}$. These relate back to normal coordinates of $\left\{ {{u}^{1}},{{u}^{2}},{{u}^{3}},t \right\}$. This will be our first type of visualization.

So for example with x and y constrained to be on a circle with fixed radius and arbitrary angle (a cylinder), we can ask what that behavior looks like in normal coordinates. That is shown above. We can also look at the velocity flow, ${{u}^{3}}$ against t, as contours as shown below. For the WWW model, the income inequality generates a corresponding velocity. What are the normal coordinates here? The direction ${{u}^{1}}$ is the relative preference for “work” versus “take”; the direction ${{u}^{2}}$ is the relative preference for “invest” versus “rent collection”; the direction ${{u}^{3}}$ is the relative intensity with which each player plays the game, work minus wealth. So in the graph below, the vertical axis is time and the horizontal axis is ${{u}^{3}}$. The strong direction along the horizontal axis reflects the increase in workers needed because of the assignment of value to wealth. This was put in as an initial condition; the model then shows the consequence of this assumption over time. Note that co-moving time contours are not constant in normal time. This reflects our way of dealing with time as similar to that of Maxwellian and not Newtonian theories.

One way to characterize a geometry is by how one measures distances. For example, we may use flat maps to plot a course over the ocean. On this map, the latitude and longitude may appear as orthogonal straight lines. However, for an accurate course, we need to know how to relate these lines to distance; if we go far enough the distance is not Euclidean, since the earth is approximately an oblate spheroid. Another way to characterize a geometry is to plot the paths that represent the shortest distance between two points. This is derived from the distance measure and reflects the curvature of the space. Thus, we may study the complexity of the geometry either by the shape of the surface or by the shortest paths. A complicated geometry is one in which the paths are complex. An example from weather might suggest what we have in mind: a hurricane or tornado both have cyclonic paths of wind, suggesting chaotic behavior. Ordinarily we associate chaotic behavior with the time dependence of the flow. Yet we can have cyclonic behavior that appears stationary. A good example of such a flow is the “spot” on Jupiter that has been around for centuries. In this talk we focus on such steady-state behaviors that remain after any transient effects die out. Of course there will also be behaviors associated with the transients, but they are the subject of a future investigation.

We suggest that the origin of complex behaviors, such as chaotic behaviors, are similar in different domains of study, be they weather phenomena, electromagnetic phenomena or decision phenomena. In each case there are symmetries that give rise to Coriolis forces, magnetic forces or payoff forces respectively. These forces arise from common differential geometry structures.

To study these structures we would start with the fact that the cyclonic behaviors are the result of rotations and such rotations can be investigated by looking at the fields that generate the rotations. The following shows a “phase space” plot of the fields from the WWW model that are used to generate the overall payoff matrix.

This field is anything but simple. The CDF picture can be rotated, showing the type of complexity one might get. For the experts, we note that this picture is the set of limit surfaces for the z component of the gauge field, along with the partial derivatives of that gauge field along the other two transverse directions x and y. A sphere would be the 3-dimensional analog of a circular phase space plot in 2 dimensions, which is quite different from what is shown here.

Of course the solutions that we obtain for a geometry may depend on the assumptions used to do the numerical work. So we take a step back and analyze the types of assumptions that we have made. Our tentative conclusion is that we can apply a “harmonic-series” approximation and still identify chaotic solutions. This follows from a technical discussion using Mathematica as a computational tool.

Consider a gravitational source equation with non-linear terms:

${{g}''\left( z \right)+\sin g\left( z \right)=F\left( z \right)}$

The source, which is the first figure above (green curve), is periodic and for illustration is a saw-tooth pattern. We scale it by a factor, init, which we vary. The equation is solved using NDSolve from Mathematica with periodic boundary conditions. It uses the method of lines to compute the function, red curve, and its derivative, the blue curve. For a sufficiently small value of the factor init, the phase space plot shown in purple is a closed cycle. However, past a critical value, the non-linear nature of the equation asserts itself and we cease to have a closed cycle. We suggest this is the start of chaotic behavior, though we haven’t used a rigorous definition to establish this suggestion yet. The factor value chosen in this figure is init=1.7.
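The same qualitative experiment can be sketched with a fixed-step RK4 integrator in place of NDSolve. The sawtooth period, the step size and the value of init below are illustrative choices, not the ones used in the figure:

```python
import math

def sawtooth(z, period=3.0):
    """Periodic saw-tooth ranging over [-1, 1]."""
    frac = (z / period) % 1.0
    return 2.0 * frac - 1.0

def integrate(init, z_max=50.0, h=0.01):
    """Integrate g'' + sin(g) = init*sawtooth(z) from rest with RK4.

    Returns the phase-space trajectory as a list of (g, g') pairs."""
    def f(z, g, dg):                       # right-hand side: (g', g'')
        return dg, init * sawtooth(z) - math.sin(g)

    g, dg, z, path = 0.0, 0.0, 0.0, []
    while z < z_max:
        k1 = f(z, g, dg)
        k2 = f(z + h / 2, g + h / 2 * k1[0], dg + h / 2 * k1[1])
        k3 = f(z + h / 2, g + h / 2 * k2[0], dg + h / 2 * k2[1])
        k4 = f(z + h, g + h * k3[0], dg + h * k3[1])
        g += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dg += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        z += h
        path.append((g, dg))
    return path

# A small source stays on a tight phase-space orbit; raising init
# probes the non-linear regime discussed in the text.
small = integrate(0.2)
```

Plotting `small` as (g, g') pairs reproduces the kind of phase-space picture described above; sweeping init upward is where the closed cycle is lost.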

Use Manipulate to make chaotic behavior visible

The Manipulate function in Mathematica is a way to explore when the phase space plot ceases to be a limit cycle. We have considered different source terms, and in particular explored approximating the saw-tooth behavior as a Fourier series with a finite number of terms.
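A truncated Fourier (harmonic) series for the saw-tooth source is easy to write down. The sketch below uses the standard coefficients $b_k = -2/(\pi k)$ for a sawtooth of period $2\pi$ ranging over [-1, 1], an illustrative normalization:

```python
import math

def sawtooth(z):
    """Saw-tooth of period 2*pi ranging over [-1, 1]."""
    frac = (z / (2 * math.pi)) % 1.0
    return 2.0 * frac - 1.0

def sawtooth_fourier(z, n_terms=25):
    """Truncated Fourier series: this sawtooth has b_k = -2/(pi*k)."""
    return sum(-2.0 / (math.pi * k) * math.sin(k * z)
               for k in range(1, n_terms + 1))

# Away from the discontinuity the partial sum tracks the source closely;
# varying n_terms (as with a Manipulate slider) shows the convergence.
```

The finite number of terms is the knob being explored: a Manipulate-style sweep over `n_terms` shows how the approximation sharpens toward the discontinuity.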

We need not extend the harmonic approximation to the solution. As noted above, we use the method of lines to obtain our full result. For more general geometries, we have also used periodic boundary conditions for all but one space direction. For time, we have considered a single Fourier frequency. For the remaining space direction we use NDSolve and the method of lines.

General geometries however, may impose differential constraints on initial conditions. On the initial boundary we may have to solve a differential equation in one or more dimensions. The boundary conditions we may need to impose may also be periodic. Now however, the method of lines may not work if there are two or more independent coordinates. So what do we do?

Our solution was to expand the unknown functions as a harmonic series with unknown coefficients. We then substitute the function and its derivatives into the differential equations and use FindRoot to obtain the numerical values of the coefficients. In one dimension we can also use the method of lines as a comparison.
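For a linear test problem the procedure can be carried out by hand, which shows the role the harmonic coefficients play. The equation, the constant B and the source coefficients below are illustrative; in the non-linear case FindRoot would solve the resulting algebraic system numerically:

```python
import math

B = 0.5                                # illustrative constant
F_coeffs = {1: 1.0, 2: 0.3}            # F(z) = sin z + 0.3 sin 2z

# Substituting the ansatz g = sum_k a_k sin(k z) into g'' + B*g = F
# gives the algebraic system a_k * (B - k**2) = F_k, solved directly here.
a = {k: Fk / (B - k ** 2) for k, Fk in F_coeffs.items()}

def g(z):
    return sum(ak * math.sin(k * z) for k, ak in a.items())

def residual(z, h=1e-4):
    """Evaluate g'' + B*g - F at z, with g'' by central differences."""
    g2 = (g(z + h) - 2 * g(z) + g(z - h)) / h ** 2
    F = sum(Fk * math.sin(k * z) for k, Fk in F_coeffs.items())
    return g2 + B * g(z) - F
```

The residual vanishing at arbitrary points confirms that the harmonic ansatz satisfies the equation; replacing the closed-form solve for `a` with a root-finder is the step that generalizes to non-linear terms.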

So, returning to the simple gravity source equation above, with a clearly chaotic behavior, purple curve, we also solve the same system using the harmonic approximation, green curve. Not surprisingly, the harmonic approximation is always a limit cycle. We see that the cycle fits into the more general solution. By changing the start point for FindRoot, we have found other limit cycles. So the chaotic behavior manifests itself in the harmonic approximation as multiple solutions.

For the decision geometry WWW model, using the harmonic series approximation, we get the limit surface that we described earlier:

That it is a surface and not a hyper-curve is a consequence of it being a harmonic series approximation. However, the specific surface may depend on the “sources”.

Steady-state analysis: We can now describe more fully the steady state analysis that we perform for differential geometries. We illustrate the technique using an example in one time and one space dimension:

$-{{\partial }_{t}}^{2}{{x}^{a}}+{{\partial }_{z}}^{2}{{x}^{a}}+A{{\partial }_{t}}{{\partial }_{z}}{{x}^{a}}+B{{x}^{a}}=0$

This form of a “wave equation” is typical of the harmonic gauge choice that is always possible to make in a differential geometry. The concept of steady-state is similar to that in electrical and mechanical engineering: As long as the functions A and B are functions only of the space variable z, we can replace the coordinate function with the product of a harmonic in time and an unknown function of space:

${{x}^{a}}={{g}^{a}}{{e}^{i\omega t}}$

In other words, the harmonic gauge equation is linear in the coordinate. This reduces the partial differential equation to one with one fewer independent variable, one that is elliptic and depends on the frequency chosen. It has the form:

${{{\omega }^{2}}g+{{\partial }_{z}}^{2}g+i\omega A{{\partial }_{z}}g+Bg=0}$
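As a sanity check on this reduction: if A and B are constants, a plane-wave mode $g=e^{ikz}$ satisfies the equation exactly when k obeys the dispersion relation $\omega^2 - k^2 - \omega A k + B = 0$. A quick numerical check, with illustrative values of A, B and ω:

```python
import cmath

A, B, omega = 0.4, 1.0, 2.0

# Roots of the dispersion relation, rewritten as
#   k**2 + omega*A*k - (omega**2 + B) = 0.
disc = cmath.sqrt((omega * A) ** 2 + 4 * (omega ** 2 + B))
k_roots = [(-omega * A + disc) / 2, (-omega * A - disc) / 2]

def residual(k, z=0.37):
    """Evaluate w^2 g + g'' + i w A g' + B g at one point for g = exp(ikz)."""
    g = cmath.exp(1j * k * z)
    g1 = 1j * k * g           # g'
    g2 = -(k ** 2) * g        # g''
    return omega ** 2 * g + g2 + 1j * omega * A * g1 + B * g

print([abs(residual(k)) for k in k_roots])
```

Both residuals vanish to machine precision, confirming the reduction term by term before tackling the z-dependent coefficients of the full problem.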

When there is more than one spatial dimension, the coefficients A and B depend on the coordinate so these become coupled elliptic partial differential equations. An example is the three spatial dimension model with one time:

This steady-state solution could be viewed as a solution to a model from general relativity, decision process theory, or in fact any differential geometry model with three space and one time dimension. The boundary and initial conditions, however, will be specific to the problem domain.

We solve these equations using NDSolve, periodic boundary conditions for all but one spatial direction, and use the method of lines for the remaining direction. Constraint differential equations for the coefficients on the initial surface (all spatial directions but that reserved for the method of lines) will be solved using the harmonic series approximation to ensure that the coefficients remain periodic on the boundary.
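The method-of-lines idea itself can be sketched without NDSolve. The advection equation below, with periodic boundary conditions and a first-order upwind difference in space, is an illustrative stand-in for the actual equations of the model:

```python
import math

def method_of_lines(n=64, c=1.0, t_end=1.0):
    """March u_t + c*u_x = 0 in t after discretizing x on a periodic grid."""
    dx = 2 * math.pi / n
    dt = 0.4 * dx / c                     # CFL-limited time step
    u = [math.sin(i * dx) for i in range(n)]
    t = 0.0
    while t < t_end:
        # First-order upwind difference; u[i-1] wraps around at i = 0,
        # which implements the periodic boundary condition.
        u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
        t += dt
    return u

u = method_of_lines()
```

Discretizing all but one direction and integrating along the remaining one is exactly the structure described above; NDSolve automates the spatial discretization and uses higher-order integrators.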

Based on our investigations to date, we expect that the resultant solutions have the possibility of exhibiting not only “linear” and “near linear” solutions corresponding to simple limit surfaces, but more complex behaviors corresponding to chaotic-type behaviors such as space filling curves. We hope to be able to investigate what we believe is a quite rich structure embedded in these differential geometries. Our remaining examples will be taken from decision process theory, and in particular from the WWW decision model. We will focus on only two aspects out of what is in fact a vast wealth of possibilities: time and communication.

Decision geometry is more complicated than Newtonian physics

We start with a consideration of time and what it means in a differential geometry that consists of a generalized notion of time and space. We have a number of questions:

• Which way does time flow? Yes we know it increases and we know it moves in some sense orthogonal to space, but in which direction is that?
• Do we define the direction of time flow along the normal to the surface of constant time?
• Do we define the direction of time as being orthogonal to a spatial volume element? The volume, $dx\wedge dy\wedge dz$, is the wedge product generalization of the “cross product”
• Are these two directions the same or different in a general geometry?
• Which way are they in the WWW model?
• How does the velocity we call $now-\beta$ help us understand and visualize the distinctions?

Time direction: In Newtonian physics, the physics that perhaps comes closest to the way most of us were raised, time clearly moves forward and never backward. Moreover, it is clear that time is orthogonal to all of the directions of space: up and down; forward and backward; left and right. Starting from this perspective, we construct our geometry, which is Euclidean in nature. Time flows and has a value that is the same everywhere in space. We have no real evidence of course that this is true. In fact, modern physics experiments demonstrate that this is not true. It is not that our ideas are unfounded, but that we have attempted to extrapolate our ideas too far beyond what we have observed.

We observe time locally, just as we measure distances in space locally. At our current location, what we mean by a coordinate is a value that is the same in a small region around where we are, at the current time. What we mean by time is a value for which a common clock suffices in a small region around where we are. Putting these ideas into mathematics brings us to differential geometry. In this discipline, we talk about a surface on which a coordinate is constant. Such surfaces exist at each point and extend to a region around that point. The region is not arbitrarily large, however; it depends on the curvature and topology of the space. For a Euclidean space-time, the region does in fact extend everywhere. For a curved space this is not the case.

Normal time: From this mathematical perspective, we might then define the direction of time at each point as being along the normal to the coordinate surface associated with time at that point. For the steady-state WWW model, we can say more. The flow in that model is stationary in the co-moving frame. So at each point, we can say that the normal of the time surface is along that flow; the flow is moving at a certain velocity, which can be normalized as the ratio of the co-moving space component of flow divided by the co-moving time component of flow:

${{\beta }_{\upsilon }}=\frac{{{E}^{t}}_{\upsilon }}{\sqrt{1+{{e}^{\alpha }}{{e}_{\alpha }}}{{E}^{t}}_{o}}$

This is the $now-\beta$. The additional square root in the denominator accounts for the energy due to the inactive component of flows that are associated with the idiosyncratic symmetries. The subscript in the numerator reflects the co-moving space components; the subscript in the denominator reflects the co-moving time component of flow.

We denote the co-moving directions as x, y and z; z is the direction along which we use the method of lines, and the other two are transverse directions in which we use periodic boundary conditions. It is thus very natural to consider not x and y, but polar coordinates such that

$x=r\cos \theta ,y=r\sin \theta$

Using this notation, we can fix the radius to a number and look at solutions for all angles. We thus map a cylinder in the co-moving space into a two-dimensional surface in the “velocity” space associated with one concept of time.

The image above shows this surface for r=0.2 and z=0.05; the CDF file provides an interactive way to view the same surface.

This figure shows the “velocity” surface at a different point, r=0.2 and z=-0.20. The two surfaces are quite different. If the flow were the same at every point, we would conclude that the geometry and time flow reflect some simple property such as the system is moving uniformly without rotation. This is clearly not the case.
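The cylinder sampling just described can be sketched in a few lines of code. Everything in the sketch below is an illustrative stand-in: the function `now_beta`, its trigonometric flow components, and the inactive flow value are hypothetical choices, not the WWW model solution computed in the notebook.

```python
import math

def now_beta(theta, z, r=0.2):
    # Toy co-moving flow components: a space part E and a time part E_time.
    # These functional forms are illustrative, not the model solution.
    E_space = (0.3 * math.cos(theta) + z, 0.3 * math.sin(theta), 0.1 * r)
    E_time = 1.0
    e_inactive = 0.5  # hypothetical inactive (idiosyncratic) flow component
    norm = math.sqrt(1.0 + e_inactive ** 2) * E_time
    return tuple(E / norm for E in E_space)

# Fix the radius and sample all angles: the circle r = 0.2, z = 0.05 on the
# cylinder maps to a closed curve, one slice of the "velocity" surface.
surface_slice = [now_beta(2 * math.pi * k / 72, z=0.05) for k in range(72)]
```

Varying z as well sweeps out the full two-dimensional surface shown in the figures.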

Orthogonal time: Now let’s pursue a different approach, which is to consider that time flows in a direction normal to the space. To give this meaning, note that we are used to considering a small volume of space at each point and associating physical facts with that volume: the volume itself, the mass contained in that volume, the total utility associated with that volume, and so on. Differential geometry also considers volumes, starting from the notion of a surface spanned by two coordinates, say x and y. The normal to the surface of constant x is a differential change dx; similarly, the normal to the surface of constant y is dy. The surface area spanned by dx and dy, however, is related to their product, called the wedge product: $dx\wedge dy$. It is the anti-symmetric product of the two vectors. The volume is the wedge product of any three independent normals: $dx\wedge dy\wedge dz$. It is the anti-symmetric product of all three.

In differential geometry, we form the dual to any wedge product by a rule that adjoins all the remaining directions to the given wedge product in a totally anti-symmetric way. In a four-dimensional space-time, the dual of the volume is a one-dimensional vector:

$*\left( dx\wedge dy\wedge dz \right)={{\varepsilon }_{\alpha 123}}d{{x}^{\alpha }}$
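The dual rule can be illustrated with the Levi-Civita symbol. The helper `levi_civita` below is our own small sketch, not code from the notebook; it confirms that dualizing the spatial volume element leaves only the one remaining direction, the time axis.

```python
def levi_civita(indices):
    """Totally anti-symmetric symbol: +1/-1 for even/odd permutations of
    (0, 1, 2, 3), and 0 whenever an index repeats."""
    if len(set(indices)) != len(indices):
        return 0
    sign, idx = 1, list(indices)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign  # count inversions
    return sign

# Dual of the spatial volume dx^1 ^ dx^2 ^ dx^3: adjoining each remaining
# direction alpha anti-symmetrically leaves only alpha = 0, the time axis.
dual = {alpha: levi_civita((alpha, 1, 2, 3)) for alpha in range(4)}
print(dual)  # -> {0: 1, 1: 0, 2: 0, 3: 0}
```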

In general, hyper-volumes are described by forms. They generalize the cross product to higher dimensions. In particular, the direction orthogonal to the volume also provides a direction for time, “orthogonal time”. We can move along orthogonal time to get to the co-moving frame, but we must use the dual flows, suitably defined with the above dual operator: $now-*\beta$:

${*{{\beta }^{\upsilon }}=\frac{{{E}_{t}}^{\upsilon }}{\sqrt{1+{{e}^{\alpha }}{{e}_{\alpha }}}{{E}_{t}}^{o}}}$

Orthogonal time gives a velocity surface above for the $now-*\beta$ that is quite different from before for r=0.2 and z=0.05. It is visually evident from the shape and not just from the numbers, though the numbers are quite different as well.

As before, the shapes are not the same at different points. Here is the point r=0.2 and z=-0.20. It is instructive to rotate these shapes and explore their detailed characteristics.

We seem to conclude that time is not an absolute, but a quantity that has some substance. It can be viewed in a variety of ways and shows different characteristics from what we are used to from Newtonian physics. The images here from Mathematica help visualize these differences. They arise from the different ways we can view the flow to a co-moving frame.

Communication has a finite speed

Closely associated with time is the speed with which events communicate with each other. We certainly know that this communication is not instantaneous. Nevertheless, the idea of instantaneous communication comes to us from Newtonian physics and is built into our way of thinking. It is hard to contemplate the consequences of communication having a finite speed. Communication is information flow, and certainly a great deal has been written on that subject. From a causal point of view, what can we say?

From our study of physics, we learn that Maxwell had it right about communication. Light, in that theory, always travels at the same speed. This is surprising since it means a beam of light originating from a train travels no faster or slower than one on land as the train passes it. This is not in conformance with our notions of relative speed from Newton. It was this idea that was modified by Einstein. It certainly impacts our way of thinking about differential geometries, since some will behave in a Newtonian fashion and others in a Maxwellian fashion. It is our choice to pick the latter.

The consequence is that we view space-time as Riemannian and that there will be an analog of “light” that travels at the maximum speed, corresponding to a zero path length:

$d{{\tau }^{2}}={{g}_{mn}}d{{u}^{m}}d{{u}^{n}}+2{{g}_{mt}}d{{u}^{m}}dt+{{g}_{tt}}d{{t}^{2}}=0$

In this expression the coefficients are summed over the spatial indices m and n. For example in the WWW model, the sums go from 1 to 3 and the direction ${{u}^{1}}$ is the relative preference for “work” versus “take”; the direction ${{u}^{2}}$ is the relative preference for “invest” versus “rent collection”; the direction ${{u}^{3}}$ is the relative intensity with which each player plays the game, work minus wealth.

The idea that communication travels at a finite speed makes sense for any theory about our world, not just physics. But it has consequences when translated into differential geometry. There is a presumed measure of distance, or metric, between any two points, given by the above rule. There is thus a surface, defined in terms of this measure or metric, that sets the velocities or flows of maximal communication:

${{g}_{mn}}\frac{d{{u}^{m}}}{dt}\frac{d{{u}^{n}}}{dt}+2{{g}_{mt}}\frac{d{{u}^{m}}}{dt}+{{g}_{tt}}=0$

What this means is that the maximum communication flow must lie on this ellipsoid. All communications must lie inside. This imposes constraints on the possible values of the metric elements, which must vary continuously from their initial values.

Here is a sample surface computed from the WWW model at a point. As one moves from point to point, the surface changes shape and can in fact turn into a hyperboloid. In Mathematica we use Manipulate to study this. A hyperboloid allows communications that occur at zero speed or infinite speed, neither of which is reasonable. Since the coefficients of the ellipsoid, the metric elements, are computed from the differential geometry equations, there are constraints on these metric elements.
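These checks are easy to sketch numerically. The metric values below are illustrative placeholders, not output of the model equations: the quadratic above is solved along a chosen direction, and the ellipsoid condition is tested as positive definiteness of the spatial block.

```python
import numpy as np

# Sample metric elements at one point (illustrative placeholder values).
g_mn = np.array([[1.2, 0.1, 0.0],
                 [0.1, 1.0, 0.2],
                 [0.0, 0.2, 0.9]])
g_mt = np.array([0.05, -0.02, 0.0])
g_tt = -1.0

# The quadric is an ellipsoid exactly when the spatial block g_mn is
# positive definite; if an eigenvalue crosses zero it degenerates into a
# hyperboloid, allowing unphysical zero or infinite speeds.
assert np.all(np.linalg.eigvalsh(g_mn) > 0), "not an ellipsoid"

def max_speed(direction):
    """Maximal communication speed along a unit spatial direction: solve
    g_mn v^m v^n + 2 g_mt v^m + g_tt = 0 for v = s * direction, s > 0."""
    n = np.asarray(direction, float)
    n = n / np.linalg.norm(n)
    a, b, c = n @ g_mn @ n, 2.0 * g_mt @ n, g_tt
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

s = max_speed([1.0, 0.0, 0.0])
v = s * np.array([1.0, 0.0, 0.0])
# The maximal flow lies exactly on the communication surface.
print(v @ g_mn @ v + 2.0 * g_mt @ v + g_tt)  # ≈ 0 up to round-off
```

Sweeping `direction` over the unit sphere traces out the communication ellipsoid itself.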

We can gain additional insight by considering not just the velocity flow of communication, but the dual normalized $\beta$-flow:

${{\pi }_{m}}=\frac{{{g}_{mn}}\frac{d{{u}^{n}}}{d\tau }+{{g}_{mt}}\frac{dt}{d\tau }}{{{g}_{tn}}\frac{d{{u}^{n}}}{d\tau }+{{g}_{tt}}\frac{dt}{d\tau }}$

It is the dual of the communication flow. We argued above that there was a difference between flow velocities related to the space and its dual. The dual was related to flows that we distinguished by whether the indices were upper or lower. Again, we are making that distinction here, to help us remember that this is not the flow but a linear combination of flow components. The requirement that communication be finite can be written in terms of these components and the inverse of the metric:

${{{g}^{mn}}{{\pi }_{m}}{{\pi }_{n}}+2{{g}^{mt}}{{\pi }_{m}}+{{g}^{tt}}=0}$

This shows the result for an initial point for the WWW model. Again, as we move away from this initial point, we require that the shape stay an ellipsoid. Zero or infinite $\beta$ flow is not physical. As before, actual communication lies inside this ellipsoid. This imposes constraints on the inverse of the metric formed from the metric matrix elements ${{g}_{mn}},{{g}_{mt}},{{g}_{tt}}$. Checking that we maintain the communication ellipsoid and its dual is easy and intuitive.
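The dual relation can be verified numerically: for any flow with zero path length, the $\pi$-components defined above satisfy the inverse-metric quadratic identically. The 4×4 metric values in this sketch are again illustrative placeholders, not model output.

```python
import numpy as np

# Full metric with index order (u1, u2, u3, t); illustrative values only.
G = np.array([[1.2,  0.1,  0.0,  0.05],
              [0.1,  1.0,  0.2, -0.02],
              [0.0,  0.2,  0.9,  0.0 ],
              [0.05, -0.02, 0.0, -1.0]])
G_inv = np.linalg.inv(G)

# A maximal-communication flow (du/dt, 1) along u1: solve the quadratic.
a, b, c = G[0, 0], 2.0 * G[0, 3], G[3, 3]
s = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
flow = np.array([s, 0.0, 0.0, 1.0])
assert abs(flow @ G @ flow) < 1e-12  # zero path length

# Lowered components p_mu = g_{mu nu} flow^nu; the dual beta-flow is the
# ratio pi_m = p_m / p_t, exactly as in the definition of pi above.
p = G @ flow
pi = p[:3] / p[3]

# The dual quadratic, written with the inverse metric, vanishes as well.
residual = pi @ G_inv[:3, :3] @ pi + 2.0 * G_inv[:3, 3] @ pi + G_inv[3, 3]
print(residual)  # ≈ 0 up to round-off
```

The identity holds because lowering a null vector with the metric yields a covector that is null with respect to the inverse metric; dividing by the time component gives the stated quadratic in $\pi$.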

Summary

We have indicated several concepts that are distinct when we move to more complex geometries:

• Steady-state solutions can be more complex: limit surfaces that are complex; chaotic behaviors
• Visualization challenges: dimensionality; gauge invariance; conservation laws (symmetries)
• Normal versus orthogonal time
• Finite communication speed: communication ellipsoids
• Application to decision geometry: the WWW model provides insight into decision-making
• Applications apply equally well to general relativity
• Speculation: can one extend these ideas to engineering? Especially the ideas of harmonics and harmonic series approximation

# Propaganda, Narrative and Geometry

We often hear the word propaganda in the press and in the media. I wonder what kind of force this represents. The competitive force is always the one that comes first to mind. This is because I think of economics and social behavior. Yet propaganda is the ability to set the narrative, it is the ability to change a person’s mind. How does one include such a force in a detailed theory of behavior? Is the narrative a gravitational field that draws all under its sway to move in the same direction, friend and foe alike? Does this field owe its strength to some type of concentration of mass or energy at some strategic location? Might we call it strategic capital? How do I see that such a narrative effectively describes the process? Is this narrative distinct from other cooperative forces as well as any competitive forces? How would I tell?

To approach an answer to these questions that might be acceptable to most reasonable audiences, I anticipate that I must overcome a general fear such audiences have of numbers being used to describe human behaviors. Perhaps it is more general. In many cases we don’t like using numbers even when describing physical events. Take our conversations about weather. Are we more comfortable in reducing weather to a yes/no prediction of rain tomorrow than in understanding the complexities of airflow, humidity and pressure as a function of time? The former is like a person deciding, albeit an unknown weather god. The latter involves taking a stand on understanding. I think we are most comfortable with a yes/no prediction.

What is involved in the more complex understanding? Weather phenomena are not the acts of a capricious god but are the result of a process involving interacting parts spread out in both space and time. This process occurs in a continuum over space and time. What is happening here and now is dictated by what has happened elsewhere in the past. This is true at every level of scale, not only from a macroscopic but also from a microscopic perspective. A process view is based not just on a qualitative and discrete yes/no understanding but on a quantitative description. The quantitative description provides the geometry; without this geometry the process is hidden.

But, you raise the valid objection that social phenomena are inherently uncertain and intrinsically about decisions which are discrete yes/no type outcomes. How can such events possibly be described as part of a continuum? My choice at this instant does not continuously flow from choices made in the past. It is a gamble and I might in fact, if given the chance, make exactly the opposite choice. Yet even in physics we have phenomena at the quantum level that from a certain point of view are uncertain and if viewed in a certain way, appear to be discontinuous. That fact has not prevented us from looking at them from another point of view as being described by a differential geometry in both time and space. Indeed, we suggest that game theory has provided us a perspective that social interactions in fact can be viewed geometrically if we focus on the mixed strategies and how they evolve in time as opposed to the pure strategies that focus on the uncertainties. We don’t focus on the actual decision, but on the mixture of decisions that are possible at any point in time.

As an example, consider that over the last decade, wealth has redistributed itself dramatically (Piketty, 2014). It has not happened discontinuously. It has evolved over time and appears to change continuously across social strata. Some members of the middle class have become wealthy whereas others have become poor. The changes reflect a process, not a capricious set of changes. The process is even more in evidence over long time scales of centuries. These then might be examples on which we can focus our attention.

Suppose we analyze the flow of wealth and assume for simplicity it is distributed to two distinct populations. If the populations are valued similarly and if the interaction is zero-sum, we expect each to receive the same payoffs in the sense that what one wins, the other loses. But what if one population believes their payoff, if they win, is much higher than the other population? We still achieve a zero-sum type game if we balance the product of the population and value for each side. If it is agreed that one side is 10 times more valuable, then this works if the other side has 10 times the population. What matters here is the interplay between competition and propaganda, since there may in fact be no objective reason for the valuation other than a possibly enforced agreement. This then is an example from the data supporting the geometric view, since we can see how the flows change as a function of wealth distribution.
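The balancing described here amounts to a one-line check; the numbers below simply repeat the 10:1 example of this paragraph.

```python
# Zero-sum balance: the product of population and valuation must match on
# both sides. One side believes its win is worth 10 times as much, so the
# other side needs 10 times the population.
pop = {"tyrant": 1, "poor": 10}
value = {"tyrant": 10, "poor": 1}
balance = pop["tyrant"] * value["tyrant"] - pop["poor"] * value["poor"]
print(balance)  # -> 0, the zero-sum condition holds
```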

So we return to the question of propaganda and narrative, which we view as a question of process. In the above example, the valuation of wealth is felt equally by both populations, yet it may not be factually based. From a theoretical point of view that is really fine. We are not establishing the “truth” of the valuation, but the outcome given that both sides adopt this “truth”. Whether this is a useful exercise is ultimately a question of measurement and quantitative analysis. In a differential geometry theory of decision processes, we look for confirmation in data (behaviors) that highlight the existence of the process. For example, for weather predictions, it is not enough to predict rain versus not rain; rather we must predict in addition behaviors that change continuously with space and time like air flow. Therefore, for social behaviors we must look for flows, as an example, which change continuously with strategic position and time.

We seek to gain understanding from two distinct directions. We look at data to be convinced social behaviors are in fact geometric processes and we look at results of theoretical simulations to be convinced that geometrical processes might explain such data. At some point we hope that these two approaches meet, even though we are not there yet.

# Inequality in Decision Processes

It seems to me that inequities start from the following type of conversation. There is a misperception among people I talk to that I actually wish to contribute to their capital in some way; that they have some enormously interesting endeavor, which I am dying to become a part of. If I contribute in this way, their capital grows. There is no reason to suppose that my capital grows as an outgrowth of this exchange. If one person is wildly successful in getting people to contribute to their capital, they will amass a large fortune at the expense of many contributors, each of whom perhaps loses little, but also gains little.

This phenomenon may be related to the ideas in a new book on capital that speaks about the growing inequality of capital (Piketty, 2014), though capital is now used in its proper sense. From a theoretical viewpoint, I think this is related to decision process theory. In that theory, substances tend to accrete into big piles. When these piles become sufficiently large, they may create stable structures. I have not shown that to be the case, but such behaviors occur in physical theories that are similar. If it did occur, we would have a dynamically created code of conduct.

The idea of capitalism, as expressed by some, is that all boats rise together so there is an advantage to rewarding the entrepreneur. The argument against, by others, is that vast inequalities may arise in which one class does not rise at all. Such class members get submerged, to continue the metaphor. A theory should be able to demonstrate both behaviors and give the necessary and sufficient conditions for each behavior to be stable. In the theoretical view, for the first argument, each individual’s payoffs or desires cease to be important; only the payoffs of the dominant player are important. In this way their desires are not satisfied. At some point they must complain. As individuals this may not be sufficient to change the new status quo. Hence they must unite in order to make an impact.

This outcome need not be the only one since a modest redistribution of effort might honor the payoffs of most people while cutting back only slightly the capital going to the dominant player. Alternatively, there may be a world in which there is no dominant player, only a dominant strategy, one that everyone individually desires. I think that that strategy can’t be becoming wealthy (in terms of money) since that means most people will become poor, in terms of money. Of course the theory does not require that outcomes be measured in terms of money, only in terms of things that are valued by the player.

Nevertheless, suppose we buy into the concept that we should act based on our self-interest. This is supported perhaps by the theoretical view that we act based on our internal view of the payoffs. I say perhaps because it is not a given that our internal view of payoffs totally reflects our self-interest. To act purely in our self-interest against others who also purely act in their self-interest suggests a great deal of effort needed on our part to defend ourselves from attack. If we could agree with our fellow antagonists on some ground rules for such fights, we might be able to reduce the time we spend on defense and increase the time we spend gaining those goals we really strive for. Such an agreement on ground rules changes the nature of the interactions by adding a code of conduct. We need to be regulated. The best regulation is one that is minimal; it is best from the standpoint of where we start from, which is as antagonists.

Of course we might start from another position of being communal, in which case we might have somewhat more regulations. However there would be I think a need to limit the regulations so as not to totally smother the members of the community.

This line of thought suggests a zero sum or constant sum code of conduct. If we give something of value we should get something in return of comparable value. This “regulation” prevents outright theft and banditry. If nobody feels they are being taken advantage of, they need not spend their time defending their possessions. I suggest this might be the basis for monetary systems. I note that such regulations necessarily reflect conserved quantities in the theory. Perhaps there are reasons why such conserved quantities come into existence and form relatively stable structures. For the moment, let us just say that we have provided plausible reasons why they might exist.

So how do we proceed? I think we have to imagine what the code of conduct would be in a variety of stable configurations. Each configuration might have several components. To start, I suggest making the distinction of fair exchange between two players $j,k$: this occurs when the difference of their distance scales ${{r}_{j}}-{{r}_{k}}$ is inactive. A related distinction is when the strategies ${{s}_{{{i}_{1}}}},\cdots ,{{s}_{{{i}_{n}}}}$ form a coalition: this occurs when their sum is inactive. I think this can be generalized in a nice way to mean that these strategies form a surface. A special case is the constant sum game: the coalition consists of all players and their strategies. Although rather simple, I think these two distinctions revise both chapter 6 and the beginning of chapter 8. There is I think a new taxonomy to be considered. It includes the cases currently done but does suggest new possibilities, such as four strategies all part of a single player.

In suggesting revisions, I have in mind keeping the distinction of the internal (hidden) nature of the players. Their nature is hidden in the sense of hidden variables and constraints that I don’t wish to explicitly take into account. These hidden variables are private and inactive. They result from processes that are not publicly shared and take no direct part in the strategic decision processes. As with constraints in physical processes, we don’t need the details of these processes, just their effect in the energy considerations.

I also keep the notion that there are active variables, which all players agree are part of the decision making process. It may happen that some of these variables or collections of these variables are part of a code of conduct, which could mean there is a coalition. A coalition acts just like an inactive variable in all respects, except that these inactive variables are public; all players are aware of these degrees of freedom and so they form a necessary part of the solution to any decision problem. To make the distinction clear, when there is a doubt, I now call such variables code-inactive. I include the fair exchange and possibly other distinctions as they become known in this set as well. I will call the hidden inactive variables private-inactive.

Another possibility for a code-inactive variable is time. Time is interesting since we normally don’t think of time as a strategy. Yet I have considered time as the hedge player that allows a symmetrized version of the game. It thus has structural significance. As a stretch distinction, it is the learning strategy which brings us into the future. From this formulation, it is clear that time and space strategies are being treated on the same footing, with the exception of the metrical difference with time. Thus for time to be a code-inactive variable, I suggest that learning is occurring at a steady (conserved) rate. The code of conduct thus may consist of different types of code-inactive variables: the learning strategy; coalitions; and fair exchange.

For every inactive variable, there will be a payoff matrix. This is true whether or not the variable is private-inactive or code-inactive. For any solution of the active variable problem, there will be a reduction to an equivalent problem that includes code-inactive variables. This problem will have payoffs for the expanded set of players that includes the code of conduct. Once solved, one can go back to the original problem and compute the payoff matrices for each of the private-inactive players. These are the native players that provide the definition of the private-inactive variables. This has been the approach of the numerical computations carried out so far.

What provides new insight is that these different types of codes of conduct, in different combinations, lead to different solutions; we have expanded the taxonomy of possible behaviors. We have to understand in more detail what decision process we have in mind. For example, the notion of a game is centered on the idea of there being payoffs. Whenever we have payoffs we essentially have a game. For any game we should look for the code of conduct which defines the game; it is the strategy that is orthogonal to all the strategies that are active in the exchange.

We can say more by characterizing the types of forces that must be associated with a code of conduct. We take this from the theory. First and foremost is the exchange or payoffs that reflect a competitive push-pull force made famous in game theory. It has analogs in physical theories: Coriolis forces in mechanics, magnetic forces in electromagnetism. The second force is less well known in economics, but we believe is also important: a gradient force whose behavior is set by surfaces of constant potential and whose strength and direction are set by the gradient normal to that surface. When the learning strategy is part of the code of conduct, we expect and see in our computations a gradient force that encourages an accumulation of value, of wealth. Other codes of conduct produce forces that discourage such accumulation. In physical theories, gravity and centripetal forces provide examples respectively. We suggest looking for such forces for each code of conduct.

You might consider it natural that the overall decision is characterized by a coalition of the whole: it is a constant sum game. From this we expect payoffs or exchanges. Indeed it is also natural to consider that economic behaviors in general are characterized by the exchange of money and value. Moreover, economic behaviors are also considered to involve fair exchange. We see a strong reason for agreeing with (Von Neumann & Morgenstern, 1944) that such behaviors should be considered as constant sum and fair exchange games with a learning strategy. This however does not mean every application of decision process theory should be thought of as such; only a subset of applications with this particular set of codes of conduct.

For example, we can have a Robinson Crusoe environment in which the total scale is not constant. This works not only for a single strategy controlled by one person, but for multiple strategies controlled by a single person. Clearly this can be generalized to multiple players where the overall sum is not a constant. These are not constant sum games. We might envision that of such possibilities, there will be those with a fair exchange between every pair of players, which thus identifies a different code of conduct. This is not the game theory of (Von Neumann & Morgenstern, 1944). Yet it is an interesting model that we can study in detail. A particular example to revisit might be the prisoner’s dilemma.

We are now ready to return to the question of inequalities in economic behaviors. Why might one group get rich and another group get poor? In our formulation of game theory, we consider not only games that are constant sum, but games in which there is fair exchange between each of the players including the learning strategy. Thus each player's scale is part of the code of conduct. If we relax the idea of fair exchanges, and even perhaps the idea that the learning strategy is inactive, then our results may generate inequalities and thus shed light on how inequalities would occur as a consequence of the theoretical assumptions. We might learn whether such behaviors are stable or not. To date, we have not focused on behaviors associated with inequality in our numerical computations. This is something that should be done.

# Wolfram Technology Conference 2013

The following is based on the talk given at the 2013 Wolfram Technology Conference held October 21-23, 2013. The talk was presented as a Wolfram CDF slide show and is reproduced here along with segments that can be executed using the free Wolfram CDF Reader. Since the slides don’t capture everything that was presented, I have also provided additional commentary.

Introduction

This talk continues an inquiry into the relationships between decision making, determinism and chaos: an initial in-depth exposition can be found on this site. Decision making is one of the most human acts and seems to be the most difficult area to formalize into a theory of behaviors that are causal and deterministic. In fact one might think that the very nature of decision making is one of chance and uncertainty. One issue we think relevant is a lack of appreciation of how causal theories deal with uncertainty. In the view of many, there is insufficient understanding of the sensitivity of the dynamic behavior to the initial conditions. When these issues are taken into account, it becomes easier to see the possibility for causal formal theories of human decision making. See a related and much earlier MIT paper on this subject based on System Dynamics, by Sterman et al.

One thing that might help our understanding of these issues is to explore how we deal with uncertainty in such mundane activities as measurements, which form the basis of physical theories. So, for example, we understand by length an attribute that characterizes the height, width or depth of something. It is a great accomplishment in understanding to separate this concept from the mechanisms by which we perform the measurement. The mechanisms involve a measuring stick and us as agent; those of you who have measured a basement know that multiple measurements yield multiple answers. Yet we are confident that the basement has well defined dimensions. How did we come to this conclusion, and how did we learn to separate out the uncertainties associated with us as agent and the intermediary of a measurement stick from the invariant concept of length? I think we all agree that the separation has been done and we are comfortable with the idea of length.

This distinction in measurements can also be made when making decisions. There are aspects of decisions that are uncertain which mask potential underlying relationships that are causal. As an example I may choose between several strategies according to a fixed set of frequencies that behave according to a deterministic causal theory, yet there can still be an uncertainty in measuring the exact numerical values of these frequencies. More importantly I may not know what choice I will make at some future time even though the frequency of choices is known. There is no requirement that a causal theory of decisions needs to address that issue. Indeed to construct a causal and deterministic theory we may be better off not including such uncertainties in our theory, just as in physics we are better off not including the measurement process. Such processes are usually treated as stochastic with no dynamic structures.

We distinguish such measurement uncertainties from the dynamic uncertainties that result from sensitivity to initial conditions. The latter lead to chaos, showing that deterministic causal theories can have rich dynamic structures with understandable regularities, in contrast to stochastic behaviors. To present these important concepts we proceed as follows:

1. Introduction
2. Decision Making
3. Determinism and Chaos
4. Elasticity
5. Determinism and Chaos in Decision Process Theory
6. Fixed Frame Models–Complete Solutions
7. Conclusions

Decision Making

We note that many decision making behaviors that appear to be uncertain may well have causal elements. For example, the behavior of the stock market, which is clearly an example of decision making, often appears to have elements that are uncertain. However, if we observe the time series behavior of the stock market using recurrence plots, we see causal systematics:

Here we have captured the behavior of a particular stock using Wolfram Alpha and created its recurrence plot. The details can be found in the CDF file, which allows the reader to further explore such plots using different stocks.
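We do not have access to the stock data pulled from Wolfram Alpha, so as a stand-in the following sketch builds a recurrence plot for a deterministic (logistic-map) time series; the series choice and the threshold are our own illustrative assumptions, not part of the original computation.

```python
def logistic_series(r=3.9, x0=0.4, n=200):
    """Generate a chaotic-regime logistic map time series (illustrative stand-in for stock data)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def recurrence_matrix(xs, eps=0.05):
    """R[i][j] is True when states i and j are within eps of each other."""
    return [[abs(a - b) < eps for b in xs] for a in xs]

xs = logistic_series()
R = recurrence_matrix(xs)

# Render a coarse text version of the upper-left corner of the plot;
# off-diagonal structure is the "causal systematics" referred to above.
for row in R[:20]:
    print("".join("#" if v else "." for v in row[:40]))
```

The recurrence matrix is symmetric with a recurrent main diagonal by construction; diagonal line segments away from the main diagonal signal deterministic structure rather than noise.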

Determinism and Chaos

The other extreme of uncertainty would be to consider behaviors in the physical world which are assumed to follow well understood laws. A simple example can be taken from physics and the motion of a pendulum. Here non-linear spatial behaviors can also manifest as non-linear time behaviors, including behaviors called chaotic.

Even without making a small angle approximation, the behavior at low amplitudes is periodic. There is no structure. However, the force on the pendulum is not proportional to the angle but to the projection of the force along the vertical direction. This introduces a non-linearity. The consequence can be highlighted in a number of ways. One is to apply an oscillating harmonic force to the pendulum. Alternatively, one can start the pendulum with a large initial velocity. Here is a typical result.

The CDF file allows the reader to experiment with different initial conditions and different values for an external harmonic force. One of the consequences of this deterministic behavior is chaotic behavior. By this we mean that the behavior depends sensitively on the initial conditions. Even though the equations are causal, the resultant behavior appears uncertain; in reality the uncertainty is an artifact of the initial conditions, not of an underlying uncertainty in the physical process. The underlying dynamics impose regularities on such chaotic behaviors that have been extensively studied. There is an interplay between what happens in space and what happens in time. We believe such interactions also occur in decision making.
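The sensitivity to initial conditions can be sketched outside the CDF file as well. The following uses textbook parameter values for a damped, harmonically driven pendulum that are known to be in the chaotic regime (the specific values are our assumption, not taken from the notebook), and tracks two trajectories whose starting angles differ by one part in a million.

```python
import math

def deriv(t, theta, omega, q=0.5, F=1.2, wd=2/3):
    """Damped, driven pendulum: theta'' = -sin(theta) - q*theta' + F*cos(wd*t)."""
    return omega, -math.sin(theta) - q * omega + F * math.cos(wd * t)

def rk4(theta, omega, t_end=100.0, dt=0.01):
    """Integrate with classical Runge-Kutta; return the angle history."""
    t, path = 0.0, []
    while t < t_end:
        k1 = deriv(t, theta, omega)
        k2 = deriv(t + dt/2, theta + dt/2*k1[0], omega + dt/2*k1[1])
        k3 = deriv(t + dt/2, theta + dt/2*k2[0], omega + dt/2*k2[1])
        k4 = deriv(t + dt, theta + dt*k3[0], omega + dt*k3[1])
        theta += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        omega += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
        path.append(theta)
    return path

a = rk4(0.2, 0.0)
b = rk4(0.2 + 1e-6, 0.0)   # nearby initial condition
tail = max(abs(x - y) for x, y in zip(a[-1000:], b[-1000:]))
print(tail)
```

The equations are fully causal, yet for these parameters the late-time separation `tail` is many orders of magnitude larger than the initial difference, which is exactly the sensitivity described above.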

Elasticity

We believe that decisions have not only a causal connectivity but a strategic connectivity. We adopt from conceptual game theory the idea that decisions are characterized by players or agents who each decide from a list of pure decisions, assigning a frequency of choice to each. From a mathematical perspective, each game is decided by these frequencies, which represent a point in the space of strategies spanned by all possible frequency choices. We focus on games which are played multiple times, so that we observe both the time evolution and the spatial connectivity. The hypothesis is that decisions change continuously in time and strategy space. We call connectivity in time causal. We call connectivity in strategic space elastic. Spatial elasticity may be related to network connectivity, whose importance is argued by Barabasi.

Game theory, as opposed to our restricted version of conceptual game theory, usually deals with an equilibrium situation, so the ideas of causality and elasticity are outside the scope of the theory. They enter indirectly in the problem statement. We make arguments about the payoffs, which are the key elements of the theory that lead to a specification of the strategic frequencies at equilibrium. What we ordinarily don't do is assume that the payoffs can change over time. We also don't consider the possibility that the payoffs and the frequencies are not simply tied by an equilibrium condition.

To explore how this works, we can take any game and see how various decision attributes such as payoffs and frequencies might change over time. We can use the sliders in the CDF model to simulate time. As an example we take the children's game of rock, paper, scissors.

In practical terms, a way to do better in the game is to know something about your opponent. If they don't like playing certain strategies, you are better off adjusting the payoffs to reflect that. As they learn about your behavior, they adjust their behavior, causing you to readjust yours. You can experiment with the above model to see the effects. In this picture, we assume that the payoffs are in fact given by their equilibrium values. Shortly we remove that restriction.
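The point about learning an opponent's leanings can be checked with the standard rock-paper-scissors payoff matrix. The biased opponent mix below is our own illustrative choice, not a value from the CDF model.

```python
STRATS = ["rock", "paper", "scissors"]
# Row player's payoff: +1 win, -1 loss, 0 tie (rows = our choice, cols = theirs).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def expected_payoffs(opponent_mix):
    """Expected payoff of each of our pure strategies against a mixed opponent."""
    return [sum(A[i][j] * opponent_mix[j] for j in range(3)) for i in range(3)]

# Suppose we learn our opponent overplays rock:
mix = [0.5, 0.25, 0.25]
payoffs = expected_payoffs(mix)
best = STRATS[max(range(3), key=lambda i: payoffs[i])]
print(payoffs, best)   # paper is the best reply to a rock-heavy opponent
```

Of course, once the opponent notices us playing paper more often, they will shift toward scissors, and the readjustment cycle described above begins.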

What we learn from this is that there are degrees of freedom that can change independently of time. We call this space as an abuse of language: we mean strategic space, not physical space. We anticipate that changes in space will influence changes in time. A simple model to illustrate a network model with elasticity is the following:

If you change the spatial variable, the time recurrence pattern changes. It illustrates what we see in the general theory without the corresponding mathematical complexity. The key concept we propose is that decisions are causal and elastic: they depend jointly on time and space variables. Because of the interconnectivity, the resultant behavior is analogous to an elastic medium, which can exhibit waves that propagate, reflect and possibly die out. This goes beyond game theory as well as simple Systems Dynamic models. We now make these ideas more precise.

Determinism and Chaos in Decision Process Theory

Just as in physical models, we have a causal dynamic model in mind. We take these ideas from decision process theory, examining the behavior along a streamline:

The first equation sets the rate of change of the frequencies that determine strategic choice. On the left is the rate of change of those frequencies, and on the right is the effect of the payoffs on that rate. When the payoffs are zero, the rate of change on the left is zero. We consider more general changes here. The second and third equations determine the response of the equations to harmonic forces, analogous to those for the pendulum.

Time dependence occurs in the above equations even if the payoffs are independent of time. We see that the frequencies are no longer constants but may oscillate and exhibit other dynamic behaviors just because we are no longer at equilibrium. Shortly we will also consider the possibility that the payoffs vary in time. But for now assume that they are constant.
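The theory's own streamline equations appear only in the figures and are not reproduced here; as an illustrative stand-in with the same qualitative feature, replicator dynamics with a constant rock-paper-scissors payoff shows frequencies that oscillate around the equilibrium instead of settling onto it.

```python
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def replicator(x):
    """dx_i/dt = x_i * ((A x)_i - x.A.x); for this skew-symmetric A, x.A.x = 0."""
    fitness = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * fitness[i] for i in range(3))
    return [x[i] * (fitness[i] - avg) for i in range(3)]

def rk4_step(x, dt=0.001):
    def add(u, v, s):
        return [u[i] + s * v[i] for i in range(3)]
    k1 = replicator(x)
    k2 = replicator(add(x, k1, dt / 2))
    k3 = replicator(add(x, k2, dt / 2))
    k4 = replicator(add(x, k3, dt))
    return [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]

x = [0.5, 0.3, 0.2]          # start away from the (1/3, 1/3, 1/3) equilibrium
inv0 = x[0] * x[1] * x[2]    # known invariant of this particular flow
x0min = x0max = x[0]
for _ in range(10000):
    x = rk4_step(x)
    x0min, x0max = min(x0min, x[0]), max(x0max, x[0])
print(x, x0min, x0max)
```

The rock frequency swings over a wide range while the invariant product of the three frequencies stays essentially constant: the motion is a persistent oscillation about equilibrium, not a decay onto it.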

We consider a prisoner’s dilemma model in which we can change the various parameters that govern the payoffs. In addition we can change the relative weights of the payoffs for each player, which can introduce non-linear effects independent of the harmonic forces.

When we have small oscillations away from equilibrium, as with the pendulum we see harmonic behaviors for all of the strategies as seen above.

However as we either add more harmonic forces or move away from small oscillations, non-linear behaviors result as shown above. We get chaotic behavior for some choices of the parameters. One can play with the details using the CDF model.

Fixed Frame Models–Complete Solutions

The other source of causal and elastic behavior occurs when the payoffs can also vary. Decision process theory provides for that possibility. There is a (numerically) soluble model; the one we choose is an attack-defense model. With a suitable choice of parameters, we again observe non-linear behaviors:

In these figures, there are four possible strategies in an attack-defense war game in which one side defends two targets (one high value, one low value) and the other side can attack the two targets. The standard game sets payoffs for the four cases. We no longer assume here that the game is played at equilibrium. We assume only that the sum of strategies (actually preferences in the theory) is conserved, leaving three strategies that can vary. Along a streamline, we view the behaviors of the payoffs as functions of time and three parameters that characterize the strategies along the streamline: x, y and z in the figure. Using sliders we can see how the phase space plot changes. This model is somewhat more complicated and is too large to provide in this talk. So in this case we just provide a couple of illustrative examples.
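For reference, the static game underneath this model has a standard mixed equilibrium. The target values below are our own illustrative assumptions (the paper's actual parameters are not given here); the closed-form 2x2 zero-sum solution shows the familiar equalizing property.

```python
high, low = 2.0, 1.0   # assumed values of the two targets

# Attacker's payoff matrix: rows = attacked target, cols = defended target.
# An attack on an undefended target wins that target's value.
M = [[0.0, high],
     [low, 0.0]]

# Mixed equilibrium of this 2x2 zero-sum game (closed form):
q_attack_high = low / (high + low)    # attacker hits the high-value target less often
p_defend_high = high / (high + low)   # defender guards the high-value target more often
value = high * low / (high + low)

# Equalizing property: the attacker's mix earns the game value against
# either pure defense, so the defender cannot exploit it.
attacker_vs_defend_high = q_attack_high * M[0][0] + (1 - q_attack_high) * M[1][0]
attacker_vs_defend_low = q_attack_high * M[0][1] + (1 - q_attack_high) * M[1][1]
print(q_attack_high, p_defend_high, value)
```

The dynamic treatment above relaxes exactly this equilibrium assumption: only the conservation of the summed preferences is retained.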

We see the same type of qualitative behavior as before, without making any assumptions about the form of the harmonic force; it is an outcome of the theory in this case.

Conclusions

In general, chaotic effects require non-linear behaviors. We have observed such behaviors in a decision process theory and expect to see such behaviors in realistic decision processes, including stock market behaviors. These behaviors depend on both causal and elastic effects. The new ingredient is paying attention to the elastic components of decisions: both payoffs and frequencies can vary in time and strategic position.

# Decision making chaos and determinism

This is an inquiry into decision-making and its connection to uncertainty. It is based on the white paper with the same title. Decision making is one of the most human acts and seems to be the most difficult area to formalize into a theory of behaviors that are causal and deterministic. In fact one might think that the very nature of decision-making is one of chance and uncertainty. One issue we think relevant is the general lack of understanding of causal theories and how they deal with uncertainty. Moreover, in our view, there is insufficient appreciation of the sensitivity of the initial conditions that determine future behaviors. When these issues are taken into account, it becomes easier to see the possibility for causal formal theories of human decision-making.

Consider the sensitivity of future behaviors to initial conditions, which has been extensively studied under the general category of chaos and chaos theory. It has been said that chaos represents for humans the way we perceive the world in its un-ordered state. If we had perfect information, so the argument goes, we would have perfect determinism. Slight disturbances of what we think we know lead to unknown consequences, even in a theory that is strictly causal and deterministic. So what is deterministic? It seems unreasonable to believe that because chaotic behavior is possible, we must throw out our causal theories. They work very well and explain a host of data. We believe that a more reasonable approach is to be more careful about what we claim to learn from these causal theories.

The theories after all reflect our efforts to identify concepts and attributes that don’t change with time or that change with time in an understandable, causal and continuous way. One thing to explore is how we deal with uncertainty in such mundane activities as measurements, which form the basis of all physical theories.

So for example, by length we understand an attribute that characterizes the height, width or depth of something. It is a great accomplishment in understanding to separate this concept from the mechanisms by which we perform the measurement. The mechanisms involve a measuring stick and us as an agent; for those of you measuring a basement, you know that multiple measurements yield multiple answers. Yet we are confident that the basement has well-defined dimensions. How did we come to this conclusion and how did we learn to separate out the uncertainties associated with us as agent and the intermediary of a measurement stick from the invariant concept of length? Today, we all agree that the separation has been done and we are comfortable with the idea of length.

Similarly, we are comfortable with the concept of time, despite our dependence on clocks to make time measurements. From such simplistic considerations we have adopted, over many centuries, physical theories of the behavior of matter that we depend on. For example, we are comfortable with a host of physics problems that relate the distances objects travel with time. We believe we understand how a pendulum works because we can predict its behavior starting from a description in which its restoring force is the source of the acceleration. The behavior is the set of positions of the pendulum over time. We start the pendulum at rest and "drive" it by a harmonic force. We predict from Newton's theory where the pendulum will be at any future moment. We compare where the pendulum is by measurement against where it is predicted to be and find agreement to a high degree of accuracy.
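That prediction-versus-measurement loop can be run in miniature: integrate Newton's equation for an undriven pendulum at small amplitude and compare the measured period against the textbook prediction 2*pi*sqrt(L/g). The parameter values are our own illustrative choices.

```python
import math

g, L = 9.81, 1.0
dt = 0.0001
theta, omega = 0.05, 0.0     # small amplitude, released from rest

t, crossings = 0.0, []
prev = theta
while len(crossings) < 3 and t < 20.0:
    # Euler-Cromer (symplectic Euler) update of theta'' = -(g/L) sin(theta)
    omega -= (g / L) * math.sin(theta) * dt
    theta += omega * dt
    t += dt
    if prev > 0 >= theta:    # downward zero crossing, once per period
        crossings.append(t)
    prev = theta

measured_period = crossings[2] - crossings[1]
predicted_period = 2 * math.pi * math.sqrt(L / g)
print(measured_period, predicted_period)
```

The measured and predicted periods agree to well under a percent, which is the kind of agreement the paragraph above appeals to.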

This model, because of the non-linear behavior of the force, generates unexpected structure. In engineering and business, there are also distinct ways to gain access to a system’s non-linear characteristics. For the pendulum, one can initiate the behavior by varying the initial conditions. Alternatively, one can “drive” the behavior by applying an external force. For example we might impose an external force characterized by a single amplitude and a single frequency. As we vary the frequency and amplitude we stress the non-linear structures of the problem. For sufficiently large amplitudes we generate chaotic structures: we go from a quasi-periodic structure to one that no longer appears periodic. We create behaviors that appear much more erratic and lack the periodic behaviors seen with smaller driver amplitudes. The idea is that these properties may in fact carry over into the realm of decision-making.

We expect that decision-making has attributes that involve imperfect information as well as perfect information. The challenge is to identify each of these, separating out those attributes that have a predictable behavior from those attributes that are inherently uncertain. We adopt game theory (Von Neumann & Morgenstern, 1944), in which an intrinsic view of decisions is a productive starting point: we separate out the pure strategies as things of permanent interest. A pure strategy is the complete set of moves one would carry out in a decision process, taking into account the moves of all of the other players or agents in the process along with any physical or chance effects that might occur. It is a complete accounting of what you would do, a complete plan given every conceivable condition. It is furthermore assumed that you can approximate this complete list with a relatively small list of pure strategies.

Just because there are pure strategies, there is no reason to believe that one of these pure strategies is the right choice to make. If you are in a competitive situation, there may be a downside to your competitor knowing that you will pick one of these strategies. The solution is to “hide” your choice by picking the pure strategies with a specific frequency. The theory determines for you what these frequencies are without informing your opponent which choice you actually will make on any given play.

Thus your decision choice is a specific frequency choice and in that sense represents the measurement of "length," despite the fact that in a real decision process, like a real measurement process, there are lots of uncertainties. You would like to determine the frequency choices the other players make and they want to understand your choices. We emphasize that knowing these frequency choices is not the same thing as knowing what you will actually do on a given play. We take this knowledge in the same way we take the knowledge about the size of our basement. We know how to get a good approximate set of measurements. We know that our basement has a size. For each measurement process we don't know what size we will actually get.
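The distinction between knowing the frequencies and knowing the next play can be shown directly: draws from a fixed mixed strategy are individually unpredictable, yet the long-run frequencies are measurable, like repeated measurements of the basement. The seed and sample size below are arbitrary choices for repeatability.

```python
import random

random.seed(7)   # arbitrary, for repeatability

mix = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
plays = random.choices(list(mix), weights=list(mix.values()), k=30000)

# The empirical frequencies recover the mixed strategy...
freq = {s: plays.count(s) / len(plays) for s in mix}
print(freq)

# ...but no single upcoming play is determined by that knowledge.
print(plays[:10])
```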

We extend game theory to decision process theory (Thomas G. H., Geometry, Language and Strategy, 2006) in which the strategy frequencies vary with time. This theory predicts future frequency values based on a given set of initial conditions. The theory is causal in this sense, without actually dictating what a player will actually do at any given moment. The basis of the theory has some similarities to physics and even more underlying similarities to mathematical models of physical processes. Just as in physics, there can be external forces that dictate how the rates of change of frequencies change in time. There will be stationary situations in which these rates of change don’t change, in which the forces generating such changes are zero. We equate that scenario with the whole literature of game theory and its consideration of static games: the frequencies are fixed numbers. Static games provide an important foundation for our approach, though our results diverge once dynamic effects are included.

A second scenario is one in which the fields that generate the forces are static, but the flows, the rates of change of the frequencies, are dynamic. The flows may depend on what other players are doing, and so we can distinguish a special subset of flows that are stationary: at a specific “location”, the flow doesn’t change. However, if you follow the streamline of the flow you will follow a path that changes in time. You might think of a weather pattern that is stationary, in which the wind at any position is constant in speed and direction. If you follow a path along the wind by adding smoke, you will see that the smoke follows a streamline that moves with time. These considerations are really rather similar to the pendulum problem.

Based on these considerations and verifying our ideas from a variety of numerical examples, we argue that behaviors from decision process theory are deterministic, yet represent the uncertainty of choices based on frequency. We can separate out from the decision process the uncertain aspect of the decision, whose future behavior is unknown: we don’t know which pure strategy will be chosen. Thus we identify that aspect of the decision process that deprives us of perfect information. We also identify those aspects of decisions processes that might evolve continuously in time and can be determined in a causal manner. These are the numerical frequencies of choice that form the basis of the choice, but don’t actually determine the specific choice at any given time. Our theory is then about the frequencies and not about the choices.

This is not the end of the story. Just because we have a theory that determines future behavior based on knowledge of past behavior, we are not justified in assuming that the predictions will be insensitive to our starting point. Non-chaotic behavior assumes that the future behavior is not sensitive to small changes in the starting point. This often follows from theories that are linear in nature. Chaotic behavior, by contrast, allows small changes to generate large behavioral differences, even if the behavior ultimately stays bounded. Over time, we expect to see significant deviations. Some of the non-linear behavior is a consequence of the fact that preferences can't grow without limit. We postulate that concept here, but in fact we do see evidence for that behavior in the full theory.

We expect that chaotic behaviors can be generated from within, without recourse to external "drivers", if there are suitable parameters that can be varied. For the pendulum, the suitable variable would be the initial speed. The initial flows and payoffs are suitable variables for decision processes. The chaotic behavior is a result of the non-linear nature of the forces and can be made visible with a "driver" representing external periodic forces. It is then a matter of whether the amplitudes and frequencies excite the underlying structures. In either case, seemingly benign behaviors such as a steady state need not indicate the lack of interesting structures. The key is how to excite these structures into existence.

# Dynamic Game: Rock Scissors Paper

The rock-scissors-paper game is simple, with one optimal mixed strategy for each player that consists of picking one of the three choices with equal frequency. The game is fair, favoring neither player. Nevertheless the game has a large following with many claims of how to win at the game.
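The claim that the equal-frequency strategy makes the game fair can be checked directly: the uniform mix earns expected payoff zero against every pure strategy, so neither player can do better.

```python
A = [[0, -1, 1],     # rows/cols: rock, paper, scissors; +1 = row player wins
     [1, 0, -1],
     [-1, 1, 0]]
uniform = [1/3, 1/3, 1/3]

# Expected payoff of the uniform mix against each opposing pure strategy:
against_pure = [sum(uniform[i] * A[i][j] for i in range(3)) for j in range(3)]
print(against_pure)   # zero against every pure strategy: the game is fair
```

This equalizing property is exactly what classical game theory has to say; decision process theory's additional content begins where this static picture ends.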

Decision process theory has something to say here that sheds new light. The perspective is that the payoffs are not fixed for each player; there are negotiation forces that influence the choices to be made, not only between players but for the same player. There are also valuation forces pushing the choices along individual strategy directions, forces that need not be part of "classical game theory".

Decision process theory predicts how these forces play out. Without going into the details of the computations, we gain insight into such forces by analyzing the qualitative aspects of this simple game in the context of the theory. For this we use a simple "dashboard" below that allows us to change the payoffs and observe the equilibrium flow direction they imply. We hope that a full theoretical treatment would yield similar insights, while correcting any misperceptions that result from this rather simplified approach.

Negotiation fields: in its simplest form, the game is played based on the idea that for each player, rock breaks (wins over) scissors, scissors cuts (wins over) paper and paper covers (wins over) rock. This is put into game theory format by saying that each player sees a unit payoff if they win and the negative of that amount if they lose. If both players play the same thing, the payoff is zero. In game theory, this matrix is called the payoff matrix and appears as the 3-by-3 block matrix in the lower left of the dashboard, one up from the bottom row. The game can be made symmetric by creating the fiction that each person plays both sides: hence the payoff matrix (negative transpose) also appears at the top right, one to the left of the last column. The last row (and last column) represents a "hedge strategy" of the fictional game; it ensures that the results of the fictional game exactly match the original payoff. In decision process theory, we take the fictional game as a non-fictional representation of the game, with the "hedge" direction re-interpreted as time.
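The symmetrization just described can be sketched following the classic construction for reducing a game to a symmetric one: the payoff matrix and its negative transpose sit in off-diagonal blocks, bordered by a hedge row and column. The exact sign and placement conventions of the dashboard may differ from this sketch.

```python
A = [[0, -1, 1],      # rock-paper-scissors payoff for the row player
     [1, 0, -1],
     [-1, 1, 0]]
m, n = len(A), len(A[0])
size = m + n + 1      # both roles plus one hedge strategy

S = [[0.0] * size for _ in range(size)]
for i in range(m):
    for j in range(n):
        S[i][m + j] = A[i][j]        # payoff block
        S[m + j][i] = -A[i][j]       # negative-transpose block
for i in range(m):
    S[i][size - 1] = -1.0            # hedge column and row
    S[size - 1][i] = 1.0
for j in range(n):
    S[m + j][size - 1] = 1.0
    S[size - 1][m + j] = -1.0

for row in S:
    print(row)
```

The fictional game is fair by construction: the full matrix is skew-symmetric, so both sides face identical prospects, and its symmetric solution reproduces the solution of the original game.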

We retain the interpretation of the payoff, which we now call the negotiation field. We make this distinction because the field can change in time and can be thought of as a negotiation between the players. As in game theory, there is a distinct negotiation field matrix for each player, so that the time dependence reflects that player's internal view of their own preferences and their opponent's preferences. So for example if you (player p1) believe there is an increase in utility for "rock breaks scissors" from 1 to 2, your ideal strategies change from equal to {0.25, 0.333333, 0.416667} and your opponent (player p2) strategies change from equal to {0.333333, 0.25, 0.416667}. Your opponent should pick up on this preference and play paper more often. You on the other hand are forced to play rock less often because of the competitive nature of the game: you play defensively.

Collective Bias Value: game theory makes no strategic distinction between games if their payoffs differ by an overall constant. We call this the collective bias value. You can verify that changing the collective bias value on the dashboard for the baseline case does not change the ideal strategies. The collective bias value for each player reflects a certain way of thinking about decision processes. In most cases, we make decisions assuming the utility of the decision is in our favor. We thus unconsciously add a collective bias value. So for example if you add a value of 0.1 to the payoffs, each payoff for you is positive and each payoff for your competitor is negative. On the dashboard, the collective bias value has been added as a convenient slider, saving you the trouble of moving each of the payoffs up by the same amount.

Valuation fields: in game theory, when defining the fictional game, when the collective bias has zero (game) value, the last row and last column are zero. In general, this last column (and last row) has values proportional to the game value. In decision process theory, we extend this concept. Because we label this column (and corresponding last row) time, we distinguish the payoffs in this column from the negotiation fields and call them the valuation fields. You can use the dashboard to see the ideal strategy in the case that the collective bias value is 0.1 and the valuation fields are 0.1 for player 2 and -0.1 for player 1. The players should have equal and opposite valuation fields because payoffs by their nature are competitive and implicitly zero sum.

We go further however. We see the valuation field for each strategy as generating a force along that strategy, since a player need not value each strategy the same. Set the collective bias value to 0.1, and all the valuations to 0.1. Now change “p2 values rock” from 0.1 to 0.2. This makes no change to your opponent’s (p2) ideal strategies, but does change your (p1) strategies from equal to {0.333333, 0.308333, 0.358333}. This says that you have learned of your opponent’s leaning towards picking rock. It is a bias on your opponent’s part. If you pick paper you will clearly gain an advantage based on your knowledge of your opponent’s bias.

On the other hand, let us say that you (p1) decide to place more valuation on rock and your opponent does not. You change your “p1 values rock” from 0.1 to 0.2. Your ideal strategy does not change this time but your opponent’s does: {0.333333, 0.358333, 0.308333}. It is not symmetric however. The reason is that your valuation of rock moves you towards the origin, toward preferring rock less. This will be compensated by your negotiation forces. This means your opponent can take advantage of your bias by picking paper less, since there is less concern you will pick that. This is compensated by choosing scissors more. Note that in these last two examples, the total rate of making choices is larger for the other player, the player that does not increase their choice.

Inertia: a related attribute of valuation fields and the collective bias value can be seen in the dashboard choice of the "normalized" ideal strategy. The normalization is to pick the time component to be unity. This allows us to make comparisons between different models. Since we argue that increasing the collective bias value should increase the valuation fields, by holding the fields constant we implicitly introduce a new parameter we call the inertia: the ratio of the collective bias value to the inertia is thus being held constant. A consequence of this is that the normalized ideal strategy will have "flows" that get smaller as we raise the collective bias value. This substantiates our view that the strategies are the rate of change of preferences in response to the valuation and negotiation forces. A high inertia corresponds to very slow movement.

Internal negotiation fields: A distinct difference between our approach here and game theory is the possibility of capturing internal conflicts, factions and self-payoffs. Suppose our opponent (p2) decides that she is in cyclic conflict over rock and paper in a way which is totally internal. On the dashboard move the slider for this possibility from 0 to 0.2. She has no reason to suppose in this case that we have made any changes in our utilities, so her ideal strategy stays the same. However we may profit from this internal cyclic conflict as seen by our ideal strategy: {0.355556, 0.288889, 0.355556}. We decrease our frequency for scissors to capitalize on the area of conflict.

Ideal strategies and forces: Our approach here has been to identify mechanisms that generate change. We call these forces. So for example, the negotiation field component {i,j} generates a force on our (p1) choices along the “i” direction as a negotiation between our choice and our opponent’s choice along the “j” direction. This is rather different from the valuation force for the component “i” that generates a force along the same direction independent of what our opponent does. When these two types of forces exactly cancel for every strategy, there are no forces moving either our strategies or our opponent’s strategies in either direction, defining an equilibrium position we call the ideal strategy.

# Zero-sum games generalized

Arguably, one starting point of game theory is the idea of a zero-sum game. Ordinary games such as chess have a winner and a loser. Games such as poker transfer money from the losers to the winner. Of course game theory extends the concept of zero-sum games dramatically to situations that are far more general than recreational games. Still, the idea of a zero-sum game separates the class of all games into two distinct camps: those which are essentially competitive and zero-sum, and those that have elements of cooperation in which the total value of the game may grow or diminish. In this regard, constant-sum games are considered to be essentially like zero-sum games, except that one can imagine the two sides agreeing on some tribute that is exchanged, which lies outside the rules of the game.

I argue that decision process theory has much in common with the theory of games, so how does a zero-sum game manifest? Or more generally, how does a constant-sum game manifest? In some ways the question is difficult because game theory has a very specific notion of utility, which has been modified in decision process theory. There are several concepts that might play the role of value, from components of the payoff matrix to the concept of preferences that underlie strategy choices. What is clear however about the concept of a zero-sum game is that something must be conserved. There must be some attribute of the decision process that is unchanged; some attribute whose value has no impact on the strategic choices being made. The most appealing approach to me is to associate such a conserved quantity with a collective strategy that is inactive. Such a collective strategy represents a collective preference whose value plays no role in the strategic outcomes. One such collective strategy would be, in some frame of reference, the sum of the strategies of all the players. Interestingly enough, the sum of such strategies in game theory is also a constant.

The concept of a collective strategy being inactive is not quite the same thing however as saying that such a strategy is totally invisible. We have to examine in more detail what is meant by a strategy playing no strategic role. In game theory, this means that the fixed point, the equilibrium point, does not depend on the value of that strategy. In decision process theory, this means that there is a momentum associated with that collective strategy which is conserved. The conservation law is set by the initial conditions. If there is a great deal of initial inertia for example, the decision system will behave quite differently than if there is very little initial inertia. These differences are not seen in game theory: zero-sum games are characterized solely by their payoffs, so that two games with the same payoffs should behave identically. I think this makes sense in decision process theory as well when you are at a fixed point. However the behavior around that fixed point should depend on the conserved quantities. Metaphorically, if the system circles the fixed point you should perceive differences depending on the amount of angular momentum the system has, even though that angular momentum is conserved.

How does one identify these conserved quantities? In game theory one identifies games for example in which the payoffs sum to zero or to a constant. These systems are competitive in that what one person wins the other players lose in such a way that exactly compensates the win. Thus competition is one way of identifying situations in which there is a conserved collective strategy. I have argued in decision process theory that a conservation law occurs in societal situations in which there is an established code of conduct to which all players adhere. Let us call such a code of conduct an effective code of conduct. A competitive situation could, by a slight stretch, also be considered an effective code of conduct. All players agree that the rules of the game are that whatever one person gains, the remaining players must provide compensation. I suggest that the conserved quantity must depend in some way on the amount that is exchanged. The more value exchanged, the more risk and hence the more interesting the dynamics required to hold such a process near some equilibrium value. Indeed, this may not be possible and the process may be unstable and fly apart, still maintaining these conservation rules.

I don’t have a strong argument from game theory about what these conserved quantities should be. However from decision process theory, I do have a strong argument: there is a unique quantity identified whenever one has identified an inactive strategy. In physical theories this is the momentum, which in Newtonian theories is the product of the mass and the velocity along that direction. In other words, the inertial mass plays a role as well as the velocity. In more general geometries such as we consider for decision process theory, the momentum still has the same qualitative property. I argue that the momentum is the “value” that we should identify with the game theory notion of a constant-sum game. I go from a direction that provides an effective code of conduct to the identification of a value whose sum is conserved; whose sum is constant independent of any and all dynamic interactions. In this way we have generalized what it means for a process to be zero-sum.

The zero-sum value is really that the sum of the time components of the payoff, the “electric field components,” is zero. This is a direct consequence of the decision effort scale being inactive. Consider a different scenario in which all of the relative player preferences are inactive; only the player efforts are active. In this case, for each player, the time components of the electric fields would be equal. Moreover, we could require that there be no closed loop current flows in the player subspace, which would imply no “magnetic fields” orthogonal to the player subspace. This is the requirement of no self-payoffs or factions. Such a model has much in common with the voting game in game theory.