
What Work Ought to Be: Lessons from Jennifer Tosti-Kharas and Christopher Wong Michaelson

In this episode of Meaningful Work Matters, Andrew speaks with Jennifer Tosti-Kharas and Christopher Wong Michaelson, co-authors of Is Your Work Worth It? and The Meaning and Purpose of Work. Jennifer is a management professor at Babson College and an organizational psychologist, while Christopher is a philosopher and professor of business ethics at the University of St. Thomas and NYU Stern.

Together, they bring complementary perspectives to one of the most pressing questions of our time: how do we understand meaningful work, both as individuals and as a society? Their conversation explores why “calling” is a double-edged sword, how 9/11 shaped their research trajectories, and what leaders and organizations must grapple with in a world of “bullshit jobs” and artificial intelligence.

Meaning as Subjective and Objective

Tosti-Kharas approaches meaningful work through psychology, where meaning lives in the mind of the person doing the work. Two people in the same role may experience their jobs entirely differently — one may see it as “just a paycheck” while another feels it is a calling.

Wong Michaelson complements this with a philosophical view. He argues that while work should feel meaningful to us and be valued by society, it must also be meaningful in itself. In other words, people can be wrong about whether their work is meaningful. The example he often cites: the 9/11 terrorists believed their work was meaningful, but their actions were objectively harmful.

Both perspectives highlight a tension that leaders and organizations cannot ignore: meaningful work is both personal and ethical.

The Legacy of 9/11

Tosti-Kharas and Wong Michaelson were management consultants in New York City during the 9/11 attacks. Living through that moment changed not only the course of their lives but also the direction of their research.

They noticed how victims were remembered through their work, and how this collective memory gave work meaning far beyond paychecks or promotions. This became the seed of their first collaborative research project and continues to shape their inquiry into how we ascribe value to work as individuals and as a society.

For many, 9/11 revealed that work is also a way we connect with others, a lens through which we are remembered, and a reflection of what we collectively value.

The Double-Edged Sword of Calling

Tosti-Kharas’s research, echoing scholars like Amy Wrzesniewski, shows that seeing work as a “calling” can be powerful, but it can also be risky. Calling can inspire dedication, resilience, and satisfaction. It can also leave people vulnerable to burnout, exploitation, and strained relationships.

In some workplaces, those who sacrifice everything for their jobs are celebrated, while boundaries and balance are overlooked. As Tosti-Kharas notes, “calling” is not available to everyone, and it should not be positioned as the only path to a meaningful life.

For leaders, this means acknowledging both the benefits and the dangers of purpose-driven work.

What Organizations Owe Their People

Wong Michaelson’s perspective pushes leaders to ask: What obligations do organizations have when it comes to meaningful work?

It is not enough to craft clever purpose statements or rely on employees’ intrinsic motivation. Organizations must create conditions that respect dignity, promote fairness, and avoid leaning too heavily on employees’ sense of purpose.

Tosti-Kharas adds that this responsibility extends beyond knowledge workers.

Nearly half of the U.S. workforce works in jobs that pay less than $20,000 a year. For people in precarious or low-wage jobs, conversations about calling can feel irrelevant or even offensive. Here, meaningfulness may come not from the job itself but from what it enables outside of work — supporting family, giving back to community, or creating stability.

Bullshit Jobs, AI, and the Future

The dialogue also takes on the phenomenon of “bullshit jobs,” as described by David Graeber. Too many people spend their careers doing work that even they secretly believe is pointless. This is both inefficient and damaging to collective well-being.

Looking forward, generative AI raises new questions.

Will it automate the tasks we find meaningless and leave space for work that is truly fulfilling? Or will it strip away jobs that people find essential to their identity? Christopher remains optimistic that uniquely human qualities like creativity and care will continue to set us apart.

But both agree that society must rethink how we define and distribute meaningful work in an era of rapid technological change.

Key Takeaways

  • Meaning is both personal and ethical. Psychology reminds us that people experience meaning differently, while philosophy reminds us that work should serve a greater good. Together, these lenses expand how we think about what makes work truly matter.

  • A calling can inspire and harm. Seeing work as a calling can fuel passion and commitment, but research shows it also makes people more vulnerable to burnout, exploitation, and blurred boundaries between work and life.

  • Organizations shape the conditions for meaning. Beyond slogans or purpose statements, leaders have a responsibility to design jobs and workplaces that respect human dignity, create fairness, and avoid over-relying on employees’ sense of purpose.

  • The future of work raises new questions. From “bullshit jobs” to the rise of AI, work will continue to evolve in ways that affect how people find and sustain meaning. Being creative, caring, and intentional about how we use these tools will be critical.

Final Thoughts

Tosti-Kharas and Wong Michaelson remind us that meaningful work is never just an individual question. It is also about how we remember one another, what we value as a society, and what organizations owe their people.

As we mark the week of 9/11, their reflections underscore that the meaning of work often becomes most visible in moments of crisis, and that the choices we make about work ripple far beyond ourselves.

Resources for Further Exploration

  • Is Your Work Worth It? How to Think About Meaningful Work (PublicAffairs, 2024) [link]

  • The Meaning and Purpose of Work (Routledge, 2025) [link]

  • Connect with Jennifer Tosti-Kharas on [LinkedIn]

  • Connect with Christopher Wong Michaelson on [LinkedIn]

The Risks and Rewards of AI for Well-Being: Lessons from Llewellyn van Zyl

On this episode of Meaningful Work Matters, Andrew speaks with Llewellyn van Zyl, a positive organizational psychologist and data scientist who is reshaping how we think about employee well-being.

As a professor at North-West University and Chief Solutions Architect at Psynalytics, van Zyl combines deep expertise in positive psychology with hands-on experience in analytics and machine learning.

In this conversation, he explains why traditional top-down models of well-being often fall short, and introduces a bottom-up, person-centered approach that treats every individual as unique. He also explores how artificial intelligence might help scale these insights, what risks and ethical concerns come with that, and what it all means for the future of work.

Why Top-Down Models Fall Short

For decades, much of positive psychology has relied on “top-down” models of well-being, such as the PERMA framework. These approaches assume that experts can define the components of well-being, design measurement tools, and then apply them across contexts.

While these models are useful for prediction and for creating a shared language, van Zyl argues that they break down in practice.

The problem is that what counts as well-being is not universal. A framework developed in the United States may not hold in Saudi Arabia, or even across subgroups within the same country. In one study, half of the psychological strengths identified by LGBTQ+ participants did not appear in the VIA Strengths model. Pride, for example, emerged as a critical strength, even though psychology often labels it as a vice.

Context matters, and averages fail to capture the lived realities of individuals.

Top-down models also treat well-being as static, impose narrow categories, and depend on self-report measures that often mask what people are actually experiencing. Van Zyl points out that someone may show all the physiological signs of burnout while still rating themselves as “a little stressed” on a survey.

A Bottom-Up Approach to Well-Being

Van Zyl offers an alternative: a bottom-up, person-centered perspective.

Instead of imposing categories from above, this approach starts with the individual and builds outward. He describes eight principles that make up this way of thinking:

  1. Every person is unique. Each individual is a case study of one, shaped by their own history, drivers, and experiences.

  2. Context matters. External conditions directly influence well-being.

  3. We are embedded in systems. Families, workplaces, and policies both shape and are shaped by individuals.

  4. Well-being is a process, not a snapshot. It unfolds over time and fluctuates in response to experiences.

  5. We need multiple perspectives. Stories, objective indicators, physiological data, and even traditional surveys all contribute to a fuller picture.

  6. It is co-created. Insights emerge through dialogue between individual and practitioner, not imposed by one on the other.

  7. Meaning is personal. The same factor can enhance one person’s well-being while detracting from another’s.

  8. Validation happens at the individual level. What matters is whether the model resonates as true for the person themselves.

These principles make clear why one-size-fits-all solutions rarely succeed. As van Zyl explains, the same experience can mean opposite things to different people. For one person, waking up at 3 a.m. may be a sign of insomnia. For another, it is a cherished time to join a Bible study group in another time zone and feel connected to their community.

Can AI Help?

If bottom-up approaches are the most accurate but also the hardest to scale, how can they be applied in practice?

This is where van Zyl’s work with artificial intelligence comes in. By analyzing thousands of personal narratives, his team has trained models to identify themes, cluster experiences, and even predict outcomes like burnout with surprising accuracy.

For example, language analysis showed that certain “high risk” words correlated strongly with burnout. Sentiment analysis revealed that the balance between personal demands and resources could explain additional variance. Taken together, these models could estimate burnout risk without relying on traditional surveys.
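To make the approach concrete, here is a minimal, purely illustrative sketch in Python of how text-derived signals might be combined into a burnout-risk estimate. The word lists, weights, and scoring rule below are hypothetical stand-ins invented for illustration; they are not van Zyl’s models, which were trained on thousands of real narratives.

```python
import re

# Hypothetical lexicons; a real system would learn these from labeled narratives.
HIGH_RISK_WORDS = {"exhausted", "drained", "hopeless", "overwhelmed", "numb"}
DEMAND_WORDS = {"deadline", "pressure", "workload", "conflict", "overtime"}
RESOURCE_WORDS = {"support", "autonomy", "recognition", "rest", "friends"}


def tokenize(text: str) -> list[str]:
    """Lowercase a narrative and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def burnout_risk(narrative: str) -> float:
    """Return a rough 0..1 burnout-risk score for a personal narrative."""
    tokens = tokenize(narrative)
    if not tokens:
        return 0.0

    # Signal 1: share of tokens that are "high-risk" words.
    risk_rate = sum(t in HIGH_RISK_WORDS for t in tokens) / len(tokens)

    # Signal 2: balance of demand vs. resource language (positive = more demands).
    demands = sum(t in DEMAND_WORDS for t in tokens)
    resources = sum(t in RESOURCE_WORDS for t in tokens)
    balance = (demands - resources) / max(demands + resources, 1)

    # Combine the two signals with arbitrary illustrative weights, clamped to [0, 1].
    score = 0.6 * min(risk_rate * 20, 1.0) + 0.4 * (balance + 1) / 2
    return round(min(max(score, 0.0), 1.0), 2)


if __name__ == "__main__":
    sample = ("I feel exhausted and overwhelmed by the workload and deadline "
              "pressure, and there is little support or rest.")
    print(burnout_risk(sample))  # e.g. 0.84 for this demand-heavy narrative
```

A learned model would replace these hand-tuned lists and weights; the sketch only shows how word-level risk signals and a demands-versus-resources balance might be folded into a single estimate, in the spirit of the findings described above.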

The goal is to create hyper-personalized assessments that capture what uniquely drives or detracts from each individual’s well-being.

Eventually, these assessments could generate equally personalized interventions, though van Zyl acknowledges that designing content to match each person’s profile remains a significant challenge.

Risks, Ethics, and Boundaries

As powerful as these tools can be, van Zyl and Soren emphasize the risks.

AI can deepen inequalities, erode uniquely human skills, and displace the personal connection that is central to care. There are also profound ethical concerns around data ownership, transparency, and consent.

Van Zyl stresses that individuals must remain at the center. People should know what is being tracked, why it is being used, and how long it will be stored. Without informed consent and clear ownership, even the most sophisticated models risk becoming exploitative.

Soren connects this to lessons from Sara Wolkenfeld’s episode, where she draws from Jewish teachings about the difference between rote work and sacred work. In the same way, we must decide what tasks we are willing to outsource to machines, and what forms of “sacred work” (such as creativity, ethical discernment, and human connection) we must hold onto ourselves.

Designing the Future of Work

Van Zyl believes AI should be used to augment, not replace.

It can handle pattern recognition, data analysis, and repetitive tasks, freeing people to focus on the parts of work that truly require human judgment and care. The challenge is not just technical but ethical: ensuring that these systems are designed intentionally, with safeguards to prevent harm.

As Soren reflects, the future could go two ways.

Left unchecked, AI might reduce work to hollow oversight of algorithms. But with thoughtful design, it could expand access to care, provide individualized support, and elevate the human aspects of work that matter most. He describes it as letting AI serve as an assistive tool, like a bionic arm that extends our capabilities, rather than a replacement for the uniquely human skills that give work meaning.

Key Takeaways

  • Top-down models of well-being overlook cultural and personal differences.

  • A bottom-up approach treats each person as unique, embedded in systems, and evolving over time.

  • AI and machine learning can help scale these insights, but they raise risks around ethics, dependency, and dehumanization.

  • Context and meaning shape how the same experience impacts well-being.

  • The future of work depends on using technology to enhance, not replace, what makes us human.

This conversation challenges assumptions about how we measure well-being and invites us to think critically about the role of AI in the workplace. By blending rigorous critique with a vision for what is possible, van Zyl offers both caution and inspiration for designing the future of meaningful work.