Before the AI takes our jobs, can it help us learn?

Jeff Dieffenbach


AI in the Service of Learners and Learning

Mired in a problem with no obvious path toward a solution? Fear not, the conventional wisdom says, some combination of blockchain, the Internet of Things, and/or AI will bail you out.

Well, if that problem is learning, or more to the point, hurdles that block learning, AI may indeed help you navigate your way to better understanding. How might this work?

Right up front, let’s tackle the question of privacy that’s inevitably going to arise. To make the ideas that follow a reality, a hypothetical “learnerAI” is going to need to know about you. A LOT about you. For the purpose of this exercise, then, let’s stipulate that you, the learner, have complete awareness and control of your learnerAI. It’s there to HELP you, NOT to report ON you.

To aid your learning, what might your learnerAI need to know about you?

  • Your basic physiological information via biosensors
    • Heart rate, pulse, pupil dilation, galvanic skin response, attention, …
  • Your interests via direct survey
    • Technical topics, operational topics, financial topics, leadership topics, …
  • Your current work via job description and email, text, phone, and web monitoring
    • Roles, projects, tasks, …
  • Your career history via resumes, LinkedIn
    • Roles, projects, tasks, …
  • Your learning history via direct survey, calendars, stored credentials, and more
    • Degrees, programs, courses, conferences, books, articles, webinars, …
  • Your desired career future via direct survey
    • Roles, …
  • Your desired learning future via direct survey
    • Competencies, skills, …

Yes, your learnerAI needs to know a LOT about you.
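To make that inventory concrete, here is one way such a profile might be organized, sketched in Python. This is purely hypothetical: every field name and category below is illustrative, not the schema of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Hypothetical schema mirroring the data categories listed above."""
    physiology: dict = field(default_factory=dict)        # heart rate, attention, ...
    interests: list = field(default_factory=list)         # from direct survey
    current_work: dict = field(default_factory=dict)      # roles, projects, tasks
    career_history: list = field(default_factory=list)    # from resumes, LinkedIn
    learning_history: list = field(default_factory=list)  # degrees, courses, ...
    desired_roles: list = field(default_factory=list)     # career future
    desired_skills: list = field(default_factory=list)    # learning future

# A profile starts sparse and fills in as the learnerAI observes and surveys.
profile = LearnerProfile(
    interests=["technical", "leadership"],
    desired_skills=["data analysis"],
)
```

The point of the sketch is simply that the seven bullet categories above decompose naturally into structured, learner-owned data.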

With that information (and more, no doubt) in place, your learnerAI will be in a position to help you overcome problems at learner, instruction, and/or policy levels (a framework developed by the MIT Integrated Learning Initiative). We’ll address solutions at each of these levels in turn (in blue).


If a learner has the right prior knowledge, motivation, interest, and physiological readiness, he or she will be in a good position to learn. The learner isn’t always in the best position to judge the state of these conditions, however.

The learnerAI can help.

  • Prior knowledge: the learnerAI will use information about the learner’s prior learning experiences to make appropriate matches with content based on the learnerAI’s assessment of the level of that content.
  • Motivation: as used here, motivation reflects external reasons for learning—to earn a credential, receive a promotion, or otherwise further a career. The learnerAI will provide motivating context to the learner by mapping the content of a learning experience to current and future roles of importance to the learner.
  • Interest: in comparison with motivation, interest is intrinsic–typically, a learner either does or does not like a topic, although interest may certainly increase with familiarity and expertise. The learnerAI will draw on learner-expressed interests to match with the content in a learning experience when skill or process (and not content) is the primary learning objective.
  • Physiological readiness: The learner needs to be well-rested, well-fed, otherwise in good physical shape, and as important, in a good mental state. The learnerAI will detect these conditions and align learning experiences with the learner’s physiological receptiveness.
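The physiological-readiness bullet can be sketched as a simple gate, assuming biosensor readings normalized to a 0 to 1 scale. The signal names and thresholds here are invented for illustration; they are not validated measures.

```python
def is_ready_to_learn(signals, min_attention=0.6, max_stress=0.7):
    """Hypothetical readiness check. `signals` is a dict of normalized
    (0-1) biosensor readings; thresholds are illustrative only."""
    attention = signals.get("attention", 0.0)
    stress = signals.get("galvanic_skin_response", 1.0)  # higher = more stressed
    return attention >= min_attention and stress <= max_stress

# A rested, focused learner passes the gate; a stressed, distracted one does not.
is_ready_to_learn({"attention": 0.8, "galvanic_skin_response": 0.3})  # True
is_ready_to_learn({"attention": 0.4, "galvanic_skin_response": 0.9})  # False
```

In practice the learnerAI would not refuse to teach an "unready" learner; it would use a signal like this to time the delivery of demanding material.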


If the instruction has the right content, delivery, and assessment, the odds of effective learning improve.

The learnerAI can help.

  • Content: Content includes the breadth, depth, and accuracy of the subject matter and the production value with which the subject matter is configured. Content may cover knowledge and/or skills. The learnerAI, knowing a lot about what’s relevant to the learner, will uncover content addressing one of three needs: the learner knows that content is needed; the learner’s manager, peer, or subordinate knows that content is needed; and most important, none of the aforementioned parties knows that content is needed.
  • Delivery: Delivery variables include human vs. digital, synchronous vs. asynchronous, duration of learning, user-requested vs. pushed-to-user, and device through which the learning is consumed. The learnerAI will look at past successes and current needs to help deliver content when it’s most valuable. That timing might be in advance of a learning need (“learning before performing”) or in just-in-time fashion at the time of need (“learning while performing”).
  • Assessment: Assessment has to do with the frequency and depth with which the learner is assessed, and the formative manner in which responses are used to guide the next piece of content and delivery. A robust learnerAI will add fidelity and precision to the adaptive branching inherent in effective “What’s next?” learning. Moreover, the learnerAI will track and record learning, both formal and informal, based on the learner’s actions, calendar, and other input.
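The content bullet's matching idea (prior knowledge gating what is appropriate, novelty determining what is worth surfacing) could be sketched as a simple scoring function. Everything here, including the catalog shape and the scoring rule, is a hypothetical illustration, not a claim about how any real recommender works.

```python
def rank_content(profile_skills, catalog):
    """Hypothetical matcher: score each catalog item by how well its
    prerequisites are covered by the learner's prior skills, zeroing out
    items that teach something the learner already has."""
    known = set(profile_skills)
    ranked = []
    for item in catalog:
        prereqs = set(item["prerequisites"])
        coverage = len(prereqs & known) / len(prereqs) if prereqs else 1.0
        novelty = 0.0 if item["teaches"] in known else 1.0
        ranked.append((coverage * novelty, item["title"]))
    # Best-prepared-for, still-novel content first; already-known content dropped.
    return [title for score, title in sorted(ranked, reverse=True) if score > 0]

catalog = [
    {"title": "Intro to Statistics", "prerequisites": [], "teaches": "statistics"},
    {"title": "Machine Learning", "prerequisites": ["statistics", "python"],
     "teaches": "machine learning"},
]
rank_content(["statistics"], catalog)  # ['Machine Learning']
```

A real system would weigh far more signals (interest, role relevance, readiness), but the shape of the problem, scoring a catalog against a learner profile, is the same.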


If the policy has the right law, access, funding, leadership, and measurement, the prospects for effective learning are better still. The learnerAI can help.

  • Law, access, and funding: Laws and regulations must be conducive to learning. Learners must have access to learning. This isn’t access via funding, but rather access via circumstance. For instance, a learner whose job role isn’t eligible for a particular type of training, or who isn’t in the right location for it, isn’t helped regardless of how good the training is. Finally, funds have to be available to pay for the learning experience. These funds may be provided by the learner or the provider of instruction. These three areas are included for completeness, but don’t really lend themselves to improvement via AI.
  • Leadership: Leaders of those providing instruction must have a philosophy conducive to and supportive of learning. Here, the learnerAI WILL help–it can be tuned, and tune itself, to learning opportunities (content and delivery) in line with executive mission and vision statements and management’s implementation of them.
  • Measurement: Measurement as used here is different from the formative level of assessment guiding the learner’s next instruction. Rather, it addresses the summative level of what learners come to know, how their behaviors change, and what organizational improvements those behavior changes drive. The learnerAI, through its tracking and reporting, will assist learners in demonstrating the value to their employer of the learning they undergo, thereby establishing a case to be made for future learning.


Learners will equip themselves with a learnerAI guide. Ideally, the learner will own this guide the way they own a LinkedIn account–it will not be provided by (and therefore accessible to) the learner’s employer. While learners may still feel pressure to share learnerAI data with employers, and especially prospective employers, learner ownership will mitigate the intrusiveness.

The learner will own, control, and tune the learnerAI to monitor physiology, job descriptions, to-do lists, communications, actions, and more. As a result, the learnerAI will assist with desired, assigned, and unanticipated learning. This learning will be delivered optimally: when the learner is ready to learn and/or when the learning is of the most value. In the background, the learnerAI will record learning for the learner to share when that sharing best benefits the learner.

Is personalized learning a problem of privilege?


Paul Emerich France, a National Board Certified Educator, reading specialist, and classroom teacher in Chicago, wrote the article “Personalized Learning is a Problem of Privilege” for EdSurge. I’ll use this post to share my thoughts (in indented green).

– – – – – – – – – –

Personalized Learning Is a Problem of Privilege
By Paul France | Jan 21, 2018

Education is a dynamic space, with new trends constantly ebbing and flowing, the pendulum swinging back and forth between truly new innovations and recycled ideas. Experienced educators will recognize these patterns. But for younger teachers like me, whose careers are still in their infancy, it’s not so easy to see through the blinders.

When I moved to the Silicon Valley in 2014, I, like many, joined the gold rush to pursue this idea called “personalized learning.” I thought it was a panacea. I truly believed that tech-powered personalized learning could be the answer education was waiting for.

Call that what you want—a misguided, naive, idealist, arrogant optimism. Perhaps the idea of personalized learning as a panacea is all of those things—or none of them. But I’ve come to learn that the label “personalized learning,” or whatever the next big thing is called, doesn’t matter. What matters more is challenging the underlying assumptions and social structures that breed inequitable ideas that do not serve what teachers and students actually need.

I don’t regret my time in the Valley, though. It taught me some important lessons that I will take with me for the rest of my career. After three years there, here’s what I’ve learned.

I’ve learned that personalized learning doesn’t necessitate technology use.

We often conflate individualization with personalization. To sustainably individualize every child’s education, it helps to have the assistance of a complex technological algorithm to assign activities to children. But this happens at a cost. Using an algorithm to determine what children see is impersonal and dehumanizing. This approach focuses on consumption of educational material instead of interaction with meaningful provocations.

Without definitions, it’s impossible to gauge Mr. France’s use of “individualization” and “personalization” and how they differ from one another. That said, “assistance of a complex technological algorithm” is not necessarily the same thing as “using an algorithm to determine what children see.” First, the best definitions of personalizing learning are founded on educators being supported by technology, not replaced by it. Second, there may be areas where technology does an equal or better job (self-driving cars are a good analogy here), freeing up the educator to focus his or her time and energy in other ways. There’s nothing in any reasonable interpretation or implementation of personalized learning that “focuses on consumption of educational material instead of interaction with meaningful provocations.”

For context, let’s again turn to the Center for Collaborative Education’s definition of personalized learning.

Personalized Learning tailors the educational experience for every student by embracing individual strengths, needs, interests, and culture, and elevating student voice and choice to raise engagement and achievement. Personalized learning takes place within the context of educational equity, providing culturally responsive learning environments and equitable educational opportunities for all students.

I see why this way of thinking prevails, though, as I used to subscribe to it. The scope of skills taught in schools is relatively narrow, and at first, it’s reasonable to assume that the right arrangement of activities on a playlist, or the correct sequence of Khan Academy videos, could meet the needs of all children.

I’ve come to learn that this way of thinking is reductive, at best. It’s simply a more sophisticated version of an industrialized model for education, moving kids through a customizable assembly line, adding quizzes, games, and videos at different rates and in different orders.

It’s important to recognize that not all technology is bad. Tools that minimize complexity, make educators more powerful, connect individuals, or redefine learning tasks can contribute to a more personal learning environment. Tools like Seesaw help children create multidimensional digital portfolios and let their parents partake in their learning journeys; apps like iCardSort and Popplet allow children to explore abstract thinking; programs like Google Earth and Skype can connect faraway people and places, redefining what sorts of experiences can take place within the four walls of the classroom.

It’s just important to remember to ask ourselves why we’re using technology, and to make sure that it is making learning personal by amplifying our humanity, not limiting it.

Second, I’ve learned that personalized learning is a problem of privilege, and that education’s problems are mainly systemic.

Technologists and their wealthy funders often hypothesize that the problems afflicting education can be amended through digital tools. But many sometimes fail to acknowledge the role that privilege and inequity play in perpetuating injustice, and instead presume that tech tools that individualize will “close the achievement gap.” Schools in affluent communities can access these technology tools easily, while schools in low-income areas—which, generally speaking, disproportionately serve communities of color—do not have access to these tools. But even if they did, I think they’d find that personalized learning is not a need at all, and that there are more pressing matters to address.

France’s argument here better supports the case for making effective personalized learning available to all than it does for suppressing the benefits of effective personalized learning for the privileged.

Many “personalized learning” tools don’t fulfill real needs. Rather, they serve perceived needs that have been fueled by privilege. Parents don’t need immediate, real-time updates on their child’s progress, and they don’t need their child’s education to be individualized. Modern society’s desire for instant gratification and boundless transparency has convinced us that these are real problems, when in reality, they’re simply socially constructed preferences.

I agree that parents don’t need “immediate, real-time updates on their child’s progress,” but I can’t see why they (and their children) wouldn’t benefit from effective individualized education (which is not remotely the same thing as “boundless transparency”).

What children need more are well-trained, well-compensated teachers who work in emotionally-safe environments where sustainability and humanity are valued above all else. But most schools are hardly able to pay teachers equitably, much less train them to hone their practice, develop engaging curriculum or even use existing technologies effectively.

What if all the billions in private capital that support the edtech industry were matched by an equal commitment to supporting our educational infrastructure? I’d like to see that kind of money invested to create a sustainable system for teaching and learning, one that actualizes a democratic vision for education by combating privilege and promoting equity within and between schools.

Schools spend something on the order of 5% of their budget on curricular materials (including edtech and infrastructure). Even in a tech-intensive environment, that number won’t top 10%. Shifting a portion of that spend to support “our educational infrastructure” doesn’t provide enough funding to appreciably move that needle. And technology offers the promise of helping to reduce the bigger piece of the pie—educator cost. With good edtech, I argue, a smaller number of better supported teachers will be able to get better outcomes at a lower cost. Are we there yet? No. Should we try to get there? Yes.

By neglecting to do so—and by choosing to invest in technology instead of people—we only deepen the divide between school districts, perpetuating compounding cycles of privilege and oppression that will only continue to widen the gap between high- and low-income schools.

Most importantly, I’ve learned that we need to work together.

There is no panacea or silver bullet that will solve the great problem of education. Relying on venture capitalism to solve perceived problems through tech-powered personalized learning only perpetuates systems of inequality, especially if only schools in high-income, predominantly white areas can access them.

No one is proposing that we rely on venture capitalism to improve our schools. Schools will improve our schools, in part through the traditional and digital education materials that they purchase. The funding model behind the organizations that offer these materials is immaterial.

No one idea, product or organization will be able to fix it alone. This is the danger of the capitalist, winner-takes-all hero mindset. It hardwires self-interest within us, a self-interest that made me want to work in Silicon Valley. I wanted to be a 21st-century knowledge worker, and I wanted to hit it big by doing something cool in technology. Blinded by my own privilege, self-interest got the best of me. I focused too much on success in the education technology world and, as a result, began to lose fulfillment in the day-to-day of teaching. I felt disconnected and disempowered, and it was because I lost perspective on what really mattered.

We need to let go of the self-interest that capitalism has instilled in us. We need to work together and support each other, not perpetuate a theory laden with privilege for the purpose of capital gain.

In actuality, it’s the system that’s broken—not necessarily the people in it. I met incredible, intelligent people in Silicon Valley: teachers who were passionate, creative, and knowledgeable; technologists who thought radically differently than I did and pushed my thinking about what was possible in the classroom. But privilege and a capitalist mindset clouded our understanding of which problems really need to be solved in education.

It’s a well known adage in Silicon Valley to “fail fast.” As I tell my students, there’s nothing wrong with failing and being wrong, as long as you make a change and avoid making the same mistakes repeatedly. Education technology needs to learn from its mistakes, and I believe that getting back in touch with the principles of human-centered design will help education enthusiasts get back in touch with what really matters in schools. After all, people who know better, do better.

The central purpose of personalized learning–as implemented by educators, not providers of educational materials–is to get in touch with human-centered principles.

To be fair, educators also need to learn from their failures. One example is elementary reading. There’s clear research showing which interventions work best to help struggling readers. But too many reading teachers instead choose a “whole language” model that’s proven to be less effective.

It’s a flawed approach to criticize the new thing without taking a hard look at how well the old thing is working (or not).

To France’s original question, is personalized learning a problem of privilege? In my view no. Implemented well, personalizing learning offers the promise of helping to level the playing field, not tilt it.

Personalized learning needs to die (or does it?)

This post’s title notwithstanding, let’s get this out in the open right away: I’m a huge proponent of personalized learning … or as education thought leader Michael Horn puts it in verb form, personalizing learning.

I drew the title from the title and lead line of a Reimagine Education article by Associate Professor Michael Kasumovic of the University of New South Wales in Sydney. I’ll use this post to respond (in indented green) to the “personalized learning needs to die” case made by Professor Kasumovic (who is also the founder of edtech company arludo).

– – – – – – – – – –

Personalized learning needs to die
Learning and teaching will change in the digital age. Let’s make sure we shape it right.
By Michael Kasumovic | undated, but likely Nov 2017

Personalized learning needs to die.

I could sit here and argue that it needs to die because companies are trying to replace our most valued resource – teachers – with computers and algorithms that will sterilize learning. I could also easily argue it’s because privatizing education and giving learning data to closed corporations are the biggest mistakes we’ll ever make. But although I think these are both very valid reasons, I’m saying this for a much simpler reason: personalized learning is never going to work and we’re wasting our time.

This is an overly harsh characterization, coming ironically from the founder of a company that describes its mission no differently than countless others in the space, including the ones he paints with his broad brush. Yes, companies are trying to sell products and services. But that’s been true for decades—we used to call them books. You could just as easily say that companies are trying to provide teachers with better tools. And to the extent that those tools allow a smaller group of teachers to serve a larger group of students, they lower cost. And cost is a big challenge in education.

As an evolutionary biologist, I know a thing or two about why we behave the way we do. If you look at our evolutionary past and our current society, it’s clear that humans are social creatures. This is one of the reasons that social games and apps have the most users and continue to be so popular. We crave connectivity with one another, for better or for worse. But personalized learning is the antithesis of social connectivity because it encourages isolation during one of the critical, formative periods of our lifetime.

Schools have been assigning homework for decades. Homework has mostly been a solitary task. And one whose efficacy isn’t supported by evidence. So, it’s not as if personalized learning is taking something that’s always social and making it less so. I agree that some, maybe even a lot, of social learning is valuable. But personalized learning isn’t antithetical to social, and it certainly isn’t about always being solitary.

I can see why people may think that a personalized approach can have an enormous benefit for learning. Imagine a student that is under- or over-performing in class, and I’m sure that you’ve imagined that they are bored and disengaged because they are either over- or under-challenged. Imagine then, that we could moderate which learning tasks an individual student receives so they are in that perfect zone – this flow – where they are perfectly challenged by an algorithm that knows exactly what they need. They are therefore driven to continue to want to learn. An appealing picture is thus painted – one in which every student can flourish.

But the reality is much different. Learning alone is daunting unless you have that internal drive that only someone with experience can have. We also know what happens to kids that are isolated as we’re seeing it more and more in this digital age: they become depressed.

Again, this assumes that personalized learning will have learners always be solitary. I don’t think they have to be any more solitary with personalized learning than they’ve been with homework.

At the same time, learning alone doesn’t prepare anyone for a job in the future as there is no job in the world where employees work alone. And as our future becomes more diverse, so will the teams we work with, meaning that social and networking skills will be of utmost importance. If our current political climate demonstrates anything, it’s that in their formative years, we should be spending more time socializing students so they realize the diversity of backgrounds people come from and what that means for the future that we’ll have together.

There are many jobs where employees spend considerable amounts of time working alone. That said, the socialization goals that Kasumovic lays out are laudable.

But there are two things that bother me most about personalized learning. The first is that machine learning algorithms can only be as good as the data that are used to train them. Early data by companies are often from a particular group of students – affluent and white. This isn’t because these companies necessarily target these groups, it’s because the schools that house these groups are the ones that can most afford to try something new and different. So what does it mean when early data collected only represents how a fraction of the population thinks and learns?

I don’t know how true the “affluent and white” statement is. And I don’t know which half of it is more relevant (my guess is “affluent”). Still, that’s a relatively straightforward fix to make–train machine learning algorithms on data sets from diverse groups of students.

And the second aspect is that an individual is more than the sum of their decisions and how they respond to a particular question at a particular time. That’s because the factor that underlines all these aspects is the thing that makes us human – our emotions. Our emotions alter how we behave, perceive the world, and perform. We could be the smartest person in the world, but if we are feeling down about ourselves, we can struggle to get through the day.

So, a good learning experience would be one that takes into account both the learner’s general ability and their current readiness to learn? That sounds, well, personal. Perhaps I disagree with Kasumovic because we have different definitions of personalized learning in mind. Unfortunately, in this regard, we’re left to try to read his mind (personal means solo learning guided only by a machine?), as he offers up no definition to support his case.

I’ve encountered numerous good definitions of the term–one that I particularly like is that of the Center for Collaborative Education:

Personalized Learning tailors the educational experience for every student by embracing individual strengths, needs, interests, and culture, and elevating student voice and choice to raise engagement and achievement. Personalized learning takes place within the context of educational equity, providing culturally responsive learning environments and equitable educational opportunities for all students.

This is why having a human component in education is so important. Teachers, through a capacity for empathy amplified by years of experience with students, know when students need a hug more than anything else. They can tell that when a student is being disruptive, it’s not because they want to spoil others’ learning, but because it’s a cry for help. Humans are very in tune with one another’s emotions and empathy is sometimes what’s needed more than anything else.

I agree. But, as we’ve seen in medicine, machines are sometimes able to pick up things that people can’t. So why not craft a system that combines both?

By now you’re probably asking yourself: why should we bother with technology if it’s not going to help our students learn? I think technology can do this, just not in the way we’re thinking about it currently. And I think that technology can make the biggest impact in science by taking the fear away and replacing it with wonder, making the intangible clear, and speeding up the scientific process.

Kasumovic’s article continues with a non-objectionable description of children as “natural born scientists” before sharing his equally non-objectionable (and arludo-based) “vision of the future of education.”

Both accounts appear to lean to a considerable extent on learning being, well, personalized.