Deep Work and AI

I found an interesting parallel between the mechanism that Cal Newport proposes to quantify 'shallow work' in his book 'Deep Work' and the mechanism that Andrew Ng proposes to gauge what current LLMs (think ChatGPT, Google's Gemini, or Microsoft Copilot) can and cannot do in DeepLearning.AI's course 'Generative AI for Everyone.'

To understand the parallel, let's first establish some basic terms. (Skip this and the following two paragraphs if you already know what shallow work, deep work, and LLMs are.) Cal Newport, a nonfiction author and associate professor of computer science who has written multiple bestsellers, especially in the productivity category, defines deep work as professional activities performed in a distraction-free state of intense concentration. These activities create a lot of value for the world. Examples include writing a research paper, writing the code or developing the product spec for Relfeed, strategic planning, or learning a complex skill.

Conversely, Cal defines shallow work as tasks that are less cognitively demanding, are more logistical in style, and can be (and often are) performed amid distractions. Examples include answering emails, attending most meetings, doing data entry, or performing routine administrative duties.[1] Whenever I talk about shallow tasks in this essay, I exclude tasks involving physical labor.

LLMs are large language models, the most popular of which is OpenAI's GPT-3.5. They are advanced machine-learning models that can understand and generate human-like text and have been responsible for the current uptick in the capabilities and popularity of AI.

Now, to the exciting bits. In his book Deep Work, Cal Newport talks about quantifying the depth of your activities to categorize them as deep or shallow work. He suggests asking yourself how long it would take to train a smart recent college graduate, with no specialized training in the relevant field, to do the task. If the training would take a long time (say, more than a few months), the task is deep; if less, it is shallow. The same comparison also lets you judge how shallow one task is relative to another.

In his course, Andrew Ng, a leading AI researcher and the cofounder of Coursera, DeepLearning.AI, and Google Brain, proposes a mental framework for judging what current LLMs (as of late 2023) can do: if a fresh college graduate could complete a task by following only the instructions in the prompt, then an LLM can likely do it too.

As you might have noticed, these two questions are strikingly similar. This essentially means that LLMs, at least the popular non-specialized chatbots, can already do a lot of extremely shallow tasks entirely on their own. With some finetuning, generative AI models can handle slightly less shallow tasks. And with more finetuning or pretraining, LLMs can assist specialists with deep tasks, though specialist supervision is still recommended for most LLM applications. As AI progresses, its capability will slowly grow from that of a fresh college graduate toward that of a specialist. It is hard to predict when it will get there, but it is on that trajectory. Newport's framework for categorizing shallow work, however, will remain unchanged. From this, we can derive three major insights.

The first insight is that shallow work will be the first to be wholly automated as AI progresses. The shallower the task, the earlier and better it will be automated. Therefore, the same question you use to gauge how shallow a piece of work is should also be the question you ask when judging which jobs AI will replace. I don't believe there will be a mass loss of jobs immediately. The first step will always be AI augmenting humans and increasing their productivity, because AI right now is not replacing jobs but partially automating or speeding up tasks. Eventually, though, the number of people required in particular roles will decrease.

In contrast, that number will increase in other fields, just as with the advent of every major general-purpose technology. The safe jobs are the ones that require deep work (or, for now, physical labor), and those should see the most growth. Such jobs will also require AI to be supervised for a long time. No matter how much an LLM helps me write code or a product spec, I must review its output extensively and find the gaps, and it will be like that for a long time. Similarly, the work of a researcher or a doctor is hard to replace completely. It will, however, be augmented, allowing them to do more, which brings us to the second insight.

The second insight is that AI is great for people who engage, or wish to engage, more deeply in deep work. With AI automating more and more shallow tasks, you are freer to work on the important, value-generating stuff. Not only do you get more time for the deeper tasks, but fewer shallow tasks also mean fewer disturbances throughout the day or week. So, if you are a deep worker and are not already embracing AI, go ahead and do so! You have far more to gain from it than most.

The third insight is that in a society where more and more shallow work is automated, the skill of working deeply becomes more important than ever. As Cal Newport points out in his book, this skill has always been rare and has become even rarer, both because of the nature of the work expected of employees and because of the constant distractions of smartphones and the internet. I recommend the book, or Cal's blog, if you want to understand the importance of deep work. While I don't believe all shallow workers will be out of a job, the ability to work deeply will become an even more significant differentiator: as everyone gradually adopts AI, the quality of shallow work will even out. Mastering the prompts and tools is not that hard, so differences in quality will come from the tools rather than the workers.[2] The value of deep work, by contrast, will keep increasing.

I highly recommend the book and the course mentioned in this essay.



[1] Neither I nor, I believe, Cal means to offend anyone whose work consists mostly of shallow tasks; these are just definitions. Both kinds of work are necessary for the world to function. Everyone, including us, has to do plenty of shallow tasks as well. And I believe that, in some ways, doing them day in and day out is harder.

[2] As AI companies compete, the differences between their products will become minimal, and the inferior products will die. Besides, the same employee can use multiple tools for different tasks; most of us already do. We already use Notion, Quip, MS Excel, or Google Sheets depending on what we want to accomplish, don't we?