Tragedy of the Commons: Instant Messaging
April 04, 2025

There may be no such thing as a bad question, but there are expensive ones. Instant messaging platforms like Slack suffer from the tragedy of the commons: sending a message takes minimal effort, and the cost of responding is hidden from the sender. People slip into wasteful patterns of communication that could easily be avoided. This is worst in public support channels, where people ask redundant questions without bothering to scroll up or use the search feature. Asking questions the right way helps optimize everyone's time.
Imagine Billy, an entry-level engineer who is trying to test his code before submitting a pull request. The code preprocesses data before running a model with PyTorch, so it is difficult to set up on a standard machine.
Act 1
<-#pytorch-support-channel->
[Billy - Tuesday, 9:46 AM]:
Has anyone run a torch build script before?
[Mandy - Wednesday, 3:24 PM]:
I've set up all of the GPU infrastructure for the org, what are you trying to do?
Here, Billy posts a very generic question in a public channel. He provides no context on his goal, so nobody in the channel has an incentive to respond. This mistake is the target of the advice 'Don't ask to ask, just ask'. Billy has wasted the time of everyone reading the channel and has waited over a day for a response. Even if and when someone responds, he is no closer to fulfilling his original goal of testing the code.
Act 2
<-Direct Messages->
[Billy - Tuesday, 10:29 AM]:
Hi
[Mandy - Tuesday, 11:01 AM]:
Hi.
[Billy - Tuesday, 11:03 AM]:
I'm having trouble running my torch build script
[Mandy - Tuesday, 12:27 PM]:
What error are you seeing?
Instead of aimlessly posting in a public channel, Billy notices that Mandy has answered a lot of people's questions and messages her privately. But rather than asking a direct question, Billy takes one step forward and two steps back. This mistake is the target of the advice 'No Hello'. Sending a bare greeting may seem polite, but it wastes a full round trip before the real question is even asked.
Act 3
<-Direct Messages->
[Billy - Tuesday, 10:29 AM]:
Hi Mandy, do you know how to add GPU support to our virtual machines?
I'm seeing this error: `No CUDA capable device is detected`
[Mandy - Tuesday, 10:31 AM]:
Our virtual machines don't support that, what do you need a GPU for?
So close. Billy has improved his approach and asks a direct question with context. However, he does not ask the right question. Adding GPU support to the virtual machines is an unnecessary side quest on the path to testing his code; asking about the attempted solution rather than the actual goal is known as the XY problem.
Act 4
<-#pytorch-support-channel->
[Billy - Tuesday, 10:29 AM]:
Hi, I updated the preprocessing code in this repo <link>
and now I'm trying to test it by running the torch script on my virtual machine.
When I run it I see this error: `No CUDA capable device is detected`. Does
anyone know how to set up GPU support on our virtual machines?
[Mandy - Tuesday, 10:31 AM]:
Our virtual machines don't support that, but there should
be a script called `run-test.sh` in the testing folder. If you use that, it
will bundle your code, deploy it to the shared GPU pool, and run the tests
This time, Billy provides appropriate context on what he is actually trying to accomplish, while still giving detail on the specific problem he is facing. This allows Mandy to immediately steer him off his side quest and onto the right path.
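As an aside, the failure mode in this story can also be caught in code rather than in chat. A minimal sketch, assuming PyTorch is installed and that a wrapper like the `run-test.sh` from Mandy's answer exists:

```python
import torch

def require_gpu() -> None:
    # Fail fast with an actionable message instead of surfacing a raw
    # CUDA runtime error like "No CUDA capable device is detected".
    if not torch.cuda.is_available():
        raise SystemExit(
            "No CUDA-capable device found on this machine. "
            "Run the tests through run-test.sh so they execute "
            "on the shared GPU pool instead."
        )

if __name__ == "__main__":
    require_gpu()
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
```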
This is Part 2 of a series where I document mental models about communication.