Thursday 24 March 2016

Procrastination Explained Through Graph Theory

Much of economics assumes that individuals make rational decisions, but in reality our decisions are often inconsistent over time. These inconsistencies pervade daily life: we make plans to complete tasks and then procrastinate, putting things off until the last minute, only to realize the task cannot be finished by the deadline. So we either do sub-optimal work or abandon the task entirely.

It is possible to reason about procrastination using concepts from network theory, and this can help us understand the problem: specifically, why it perpetuates itself and what can be done to avoid it.

Let’s consider an example attributed to Nobel Laureate George Akerlof to illustrate the idea. Suppose an agent needs to ship a package sometime in the next n days; he can ship it on any day t, where 1 <= t <= n. The cost of shipping is c, and each day the package has not yet arrived at its destination incurs a cost x from the item not being used. If shipping takes h days, the total cost of shipping on day t is (t+h)*x + c. The optimal strategy is to ship on the first day and incur the minimum cost, but this is not the behavior we usually observe. The agent procrastinates, and we can show that the postponement continues until the last day n.

We can model this using the notion of present bias: costs incurred at the current time appear magnified to the agent by a factor b, b > 1. The agent therefore perceives the cost of shipping on day t as b*c + (t+h)*x, while believing the cost of shipping on day (t+1) to be c + (t+1+h)*x. The difference between the two is (b-1)*c - x, so if (b-1)*c > x the agent procrastinates, believing that shipping the next day will be cheaper. This continues up to day n, when he cannot wait any longer and must ship immediately. The agent then incurs a total cost of c + (n+h)*x, much higher than the minimum possible cost of c + (1+h)*x. Despite the agent’s initial objective of following the cheapest path, he ends up on a sub-optimal route and pays an extra cost.
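The day-by-day postponement is easy to simulate. Here is a minimal Python sketch; the parameter values (c, x, h, b, n) are illustrative assumptions chosen so that (b-1)*c > x, the condition under which procrastination occurs.

```python
# Illustrative parameters (assumptions, not from the text): shipping cost c,
# daily holding cost x, transit time h, present bias b, deadline n.
c, x, h, b = 10.0, 1.0, 2.0, 2.0
n = 30

day = 1
while day < n:
    ship_today = b * c + (day + h) * x        # today's cost is magnified by b
    ship_tomorrow = c + (day + 1 + h) * x     # tomorrow's cost looks normal
    if ship_today <= ship_tomorrow:
        break
    day += 1                                  # procrastinate one more day

print(day)                    # 30: the agent waits until the deadline
print(c + (day + h) * x)      # 42.0: actual cost paid, c + (n+h)*x
print(c + (1 + h) * x)        # 13.0: cost of the optimal day-1 shipment
```

Since (b-1)*c = 10 exceeds x = 1, shipping tomorrow always looks cheaper than shipping today, so the loop runs all the way to day n.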

To develop this model further, let us consider quasi-hyperbolic discounting. We assume that for any payoff c incurred at a future time t (t >= 1), its present value is c*β*δ^t, where β, δ <= 1. Here β is the agent’s present bias and δ is the discounting factor. Assume the agent is naïve about his own time inconsistency and that δ = 1 (i.e. future payoffs are not progressively discounted; they are scaled only by the constant β, the present bias). The example above follows these assumptions with b = 1/β, so the following two statements are equivalent:

A cost c incurred at the current time t is magnified by b, i.e. perceived as b*c ⇔ a cost c incurred at time (t+1) is perceived as β*c, where β <= 1 and b = 1/β.

We can model this goal-seeking problem as a task graph, in which the steps the agent must take to complete the task are nodes connected by directed edges. The agent, currently at node s, wishes to reach the final goal state t. Each node on a path from s to t represents an intermediate state or sub-task, while each edge represents a transition between steps; its weight is the cost of completing that sub-task and reaching the new state.

Consider the graph shown in the figure, where the agent wishes to go from initial state s to final state t. The agent iteratively computes the shortest path from his current position to t using the present-bias rule discussed above, so the perceived shortest path depends on the agent’s current state. With bias β = 1/2, the perceived path lengths from s to t are as follows:


Path       Perceived cost from node s
s-a-b-t    16 + 2β + 2β = 18
s-c-d-t    8 + 8β + 8β = 16
s-c-e-t    8 + 2β + 16β = 17

Thus the agent traverses edge (s, c) and reaches node c, since s-c-d-t is the perceived shortest path. On reaching c, the agent recalculates the shortest distances to t:
Path     Perceived cost from node c
c-d-t    8 + 8β = 12
c-e-t    2 + 16β = 10

More formally, let e1(P), e2(P), ... denote the edges of a path P from the agent’s current node v to t, and let c(ei) be the cost of edge ei. The agent chooses, among all paths P from v to t, the one minimizing c(e1(P)) + β * Σ_{i>1} c(ei(P)).

Now the agent traverses edge (c, e), as it lies on the perceived shortest path to t from c. From e there is only one path to t, of cost 16, so the total cost of the route s-c-e-t is 26, even though the actual shortest path (s-a-b-t) costs 20. The agent exhibits time-inconsistent behaviour, and as a result makes sub-optimal decisions.
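The traversal above can be simulated directly: at each node the agent re-plans using the present-biased rule and then takes one step. A Python sketch with the edge weights read off the example graph (brute-force path enumeration is fine for a graph this small):

```python
# Edge weights of the example task graph from the tables above.
graph = {
    's': {'a': 16, 'c': 8},
    'a': {'b': 2},
    'b': {'t': 2},
    'c': {'d': 8, 'e': 2},
    'd': {'t': 8},
    'e': {'t': 16},
    't': {},
}
beta = 0.5

def all_paths(g, u, goal, path=None):
    """Enumerate all simple paths from u to goal."""
    path = (path or []) + [u]
    if u == goal:
        yield path
        return
    for v in g[u]:
        if v not in path:
            yield from all_paths(g, v, goal, path)

def perceived_cost(g, path, beta):
    """First edge at full cost, all later edges discounted by beta."""
    costs = [g[u][v] for u, v in zip(path, path[1:])]
    return costs[0] + beta * sum(costs[1:])

# Present-biased traversal: re-plan at every node, then take one step.
node, total, route = 's', 0, ['s']
while node != 't':
    best = min(all_paths(graph, node, 't'),
               key=lambda p: perceived_cost(graph, p, beta))
    total += graph[node][best[1]]
    node = best[1]
    route.append(node)

print(route, total)   # ['s', 'c', 'e', 't'] 26
```

The agent first heads for c (planning s-c-d-t), switches to e on arrival at c, and ends up paying 26 instead of the true optimum of 20.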

Reasoning about tasks with rewards

So far we have considered a fixed-goal model in which the agent’s objective is to reach the target along the cheapest path. Let us now consider a reward model in which the agent earns a reward r on reaching the target node, offsetting the cost of traversing the path from s to t. Like future costs, the reward is discounted by the bias β relative to costs incurred in the present. In this model, the agent pursues the perceived cheapest path only as long as its cost does not exceed the perceived value of the reward. For example, consider a 3-node path with nodes s, v and t, edge (s, v) of cost 1 and edge (v, t) of cost 4. If β = 1/2 and a reward of 7 is given at t, the agent traverses edge (s, v) because the perceived cost of the path is 1 + 4*β = 3 < 7*β = 3.5. However, once the agent reaches v, the cost of reaching t becomes 4 > 7*β = 3.5, and the agent abandons the task midway.
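The two evaluations the agent makes on this 3-node path can be written out explicitly; a short sketch using the numbers from the text:

```python
# Reward model on the path s-v-t: edge costs 1 and 4, reward 7 at t, beta = 1/2.
beta, reward = 0.5, 7
cost = {('s', 'v'): 1, ('v', 't'): 4}

# At s: the first edge counts in full, the rest of the plan is discounted.
perceived_at_s = cost[('s', 'v')] + beta * cost[('v', 't')]   # 1 + 2 = 3
print(perceived_at_s < beta * reward)    # True: 3 < 3.5, the agent starts

# At v: the cost of (v, t) is now a present cost and is no longer discounted.
perceived_at_v = cost[('v', 't')]        # 4
print(perceived_at_v > beta * reward)    # True: 4 > 3.5, the agent abandons
```

The same reward that looked sufficient from s no longer covers the undiscounted cost once the agent stands at v.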
We can extend this discussion by asking: can we improve the agent’s efficiency in reaching the goal by changing the payoffs or deleting some nodes?

Dealing with choice reduction

It can be shown that by deleting suitable nodes, it is possible to prevent the agent from quitting the task midway. We illustrate this with an example.
Suppose a student is taking a 3-week short course in which she must submit 2 projects by the end of week 3; she may complete them in whichever weeks she chooses. In any week in which she does no project the cost is 1 unit (say, the cost of attending classes); doing 1 project in a week costs 4 units, and doing 2 projects in a week costs 9 units. Completing the course earns a reward of 16 units. Let β = 1/2. This scenario can be represented as the graph below.

Node vij corresponds to the state in which the student has completed j projects after the ith week of the course, so s = v00 and t = v32 in the example above. The student’s path runs s–v10–v20: she intends to complete both projects in the last week, but on reaching v20 the cost of edge (v20, t) is 9 > r*β = 16/2 = 8. The student concludes it is too costly to complete both projects and drops the course.
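The whole course graph fits in a few lines of Python. This is a sketch under the assumptions stated above (states (i, j) = j projects done after week i, weekly costs 1/4/9, reward 16, β = 1/2); the agent re-plans each week and quits when even the cheapest remaining plan looks costlier than the discounted reward.

```python
# State (i, j): the student has completed j projects after week i, so the
# start is s = (0, 0) and the goal is t = (3, 2). Costs are from the text.
beta, reward = 0.5, 16
week_cost = {0: 1, 1: 4, 2: 9}   # cost of doing 0, 1 or 2 projects in a week

def plans(state):
    """All ways to finish the course from `state`, as lists of
    (next_state, edge_cost) steps ending at t = (3, 2)."""
    i, j = state
    if state == (3, 2):
        return [[]]
    if i == 3:
        return []                 # out of weeks with projects still missing
    out = []
    for k in (0, 1, 2):
        if j + k <= 2:
            for rest in plans((i + 1, j + k)):
                out.append([((i + 1, j + k), week_cost[k])] + rest)
    return out

def perceived(plan):
    """This week's cost in full, later weeks discounted by beta."""
    return plan[0][1] + beta * sum(c for _, c in plan[1:])

def simulate():
    state, route = (0, 0), [(0, 0)]
    while state != (3, 2):
        best = min(plans(state), key=perceived)
        if perceived(best) > beta * reward:   # cheapest plan looks costlier
            return route, 'abandoned'         # than the discounted reward
        state = best[0][0]
        route.append(state)
    return route, 'completed'

print(simulate())   # ([(0, 0), (1, 0), (2, 0)], 'abandoned')
```

The simulation reproduces the story: the student coasts through weeks 1 and 2, then faces an undiscounted cost of 9 against a discounted reward of 8 and quits.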




How can the instructor prevent such a situation from happening?

Suppose the instructor imposes an intermediate deadline so that node v20 no longer exists, i.e. the student must complete at least 1 project by the end of week 2. The shortest-path problem now changes: at node s the student decides to traverse the path s-v10-v21-t, and on reaching v10 and then v21 she continues along this path until finally reaching t. Thus, by reducing the space spanned by the graph, i.e. by choice reduction, it is possible to increase the efficiency of the agent.
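With v20 removed, we can check by hand that at every state the student actually visits, the cheapest remaining plan stays below the discounted reward of β*16 = 8, so she never quits. A quick Python check of the arithmetic (path labels and the 1/4/9 weekly costs as in the text):

```python
# Perceived cost of the cheapest remaining plan at each visited state,
# with node v20 deleted (beta = 1/2, reward = 16, weekly costs 1/4/9).
beta, reward = 0.5, 16

at_s   = min(1 + beta * (4 + 4),   # s-v10-v21-t: nothing, one, one
             4 + beta * (1 + 4),   # s-v11-v21-t: one, nothing, one
             4 + beta * (4 + 1))   # s-v11-v22-t: one, one, nothing
at_v10 = min(4 + beta * 4,         # v10-v21-t: one project each week
             9 + beta * 1)         # v10-v22-t: both projects in week 2
at_v21 = 4                         # v21-t: one project in week 3

print(at_s, at_v10, at_v21)                                     # 5.0 6.0 4
print(all(c <= beta * reward for c in (at_s, at_v10, at_v21)))  # True
```

Every perceived cost (5.0, 6.0 and 4) stays below 8, so the intermediate deadline carries the student all the way to t.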

Conclusions

So far we have seen that graph theory lets us reason about behaviors such as procrastination, goal abandonment and incentivisation.

We have only looked at selected aspects of the problem. The discussion can be extended to questions such as how to prevent goal abandonment when agents have different values of β, or, in the choice-reduction setting, how to place rewards so that multiple agents reach the goal while claiming as little total reward as possible. While network theory can help us understand our responses to many such problems, at the very least this kind of analysis of everyday problems can make us aware of our own inconsistencies and pave the way for more effective decision making.

References

[1] Jon Kleinberg and Sigal Oren, "Time-Inconsistent Planning: A Computational Problem in Behavioral Economics", 15th ACM Conference on Economics and Computation, 2014.

[2] George Akerlof, "Procrastination and Obedience", American Economic Review: Papers and Proceedings, May 1991.
