choice Magazine

Beyond the Page ~ The Role of AI in Coaching

June 27, 2023 Garry Schleifer

In today's episode, I'm speaking with Andrea Paviglianiti, the author of an article in our latest issue, "Technology & AI ~ Will it support or replace human coaching?" His article is entitled "The Role of AI in Coaching: A New Frontier."

As AI adoption continues to grow, coaches and consultants may wonder whether it will be their number one competitor or their ally, but it's worth exploring the potential of advanced AI models, such as GPT, to enhance coaching through dialogue, domain-specific expertise, and continuous improvement. Specifically in the Americas, a mentor coach may focus primarily on upgrading the coach's skills from a technical perspective. A coach supervisor, however, explores the various aspects of self-awareness, relational dynamics and systemic dimensions to help uncover invisible influences that may hinder the quality of a coach's work and the expression of those skills.

As technology advances, AI is increasingly becoming a buzzword in the business world, and its role in coaching is growing in importance. AI is being utilized in a wide range of industries. Now, this new technology is also being incorporated into coaching, allowing coaches to improve their performance and enhance their human connection with clients.

In this podcast, we will discuss how AI could be used in coaching to improve the coaching process and the impact AI will have on the future of coaching.

 Andrea Paviglianiti  works at IBM as a Data Scientist, co-creating Data & AI solutions with clients across Europe using cutting-edge technologies. 

He is especially interested in Trusted AI and AI Ethics.

Privately, he focuses on learning and its challenges, looking for innovative and more efficient ways to master knowledge, skills, and critical thinking. 

He regularly posts on his website "The Key To Think" about long-term learning strategies like situational awareness, mindfulness, observation, memory training, and introspection.

Andrea's mission is to help people have clarity in what they think and purpose in what they do, using learning as a means to grow and achieve their goals; possibly, merging his two professions. 

He is the author of "New Maieutic: An Introductory Guide on the Philosophy of Dialogue for the Personal Development Through Coaching" and is out to write more.


Watch the full interview by clicking here.

Find the full article here: https://bit.ly/A-Paviglianiti

Learn more about Andrea here.

Connect with Andrea here: add the password CHOICEMAGAZINE_v21n2 to schedule anything up to 1h30 (it can be a coaching session, a chat, a consultation... anything). Time can be split.

Grab your free issue of choice Magazine here - https://choice-online.com/

In this episode, I talk with Andrea about his article published in our June 2023 issue. 

Garry Schleifer:

Welcome to the choice Magazine podcast, "Beyond the Page." choice, the magazine of professional coaching, is your go-to source for expert insights and in-depth features from the world of professional coaching. I'm your host, Garry Schleifer, and I'm thrilled to have you join us today. In each episode we go, believe it or not, "beyond the page," and dive deeper into some of the most recent and relevant topics impacting the world of professional coaching, exploring the content, interviewing the talented minds behind the articles, and uncovering the stories that make an impact. choice is more than just a magazine. For over 20 years, we've built a community of like-minded people who create, use and share tools, tips, and techniques to add value to their businesses and, of course, to impact their clients, because that's what we all want, right? In today's episode, I'm speaking with data scientist, I love that title, Andrea Paviglianiti, who's the author of an article in our latest issue, "Technology and AI: Will it support or replace human coaching?" The article is titled "The Role of AI in Coaching: A New Frontier." Andrea works at IBM as a data scientist, co-creating data and AI solutions with clients across Europe using cutting-edge technologies. What I really love about him is that he's especially interested in trusted AI and AI ethics. Privately, he focuses on learning and its challenges, looking for innovative and more efficient ways to master knowledge, skills and critical thinking, and you'll read it in the article. We're not going to go over the whole article. He regularly posts on his website, "The Key To Think," about long-term learning strategies like situational awareness, mindfulness, observation, memory training, and introspection. Andrea's mission is to help people have clarity in what they think and purpose in what they do, using learning as a means to grow and achieve their goals; possibly, merging his two professions.
He's the author of "New Maieutic: An Introductory Guide on the Philosophy of Dialogue for the Personal Development Through Coaching." And stay tuned, he's on his next one already. He's going to be writing some more, as well as having written the article for us. So thank you so much for writing the article, Andrea, and for being with us today. Welcome.

Andrea Paviglianiti:

Thank you, Garry.

Garry Schleifer:

You have a very interesting background. But I really must say, what really stood out for me is that you are committed to, or interested in, and I'm saying committed because I read it in the article, trusted AI and AI ethics. Now, why did you decide to write for us?

Andrea Paviglianiti:

Well, it's very simple. It's not the first time I've tried to write for you, actually. It's just that for all the other topics, I didn't really manage to write something valuable. But then this issue came, right? The role of AI in coaching, using technology to aid the coaching business. I happen to be a data scientist, and I also happen to be a coach. So I said, that's my sweet spot. I couldn't lose the opportunity, and I went through with it. That's how it came about.

Garry Schleifer:

And here you are. Well, thank you. Very, very informative. I mean, some of the things that I read in there were comforting, thought-provoking, but I'll let people read the article. But one of the key things, and you wrote about this: coaches are worried, founded or unfounded, will AI replace coaching?

Andrea Paviglianiti:

I don't think that's possible. I don't think you can replace coaching, because coaching itself is a process that goes through human communication, and that encompasses not only questions, like powerful questions; it encompasses much more. For example, you are going to analyze, whether you want to or not, whether you are conscious of it or not, the whole emotional spectrum of a person through non-verbal and paraverbal communication, which we know can amount to more than 60 to 80% of the whole communication between two or more people. And there is even more, right? My belief is that coaching is none other than a form of dialogue. And a dialogue is what happens between two people in order to provoke ideas, to create something new, to raise awareness. Now, there are two things that I think AI doesn't have yet. The first is critical thinking. Critical thinking goes through a process of analysis, elaboration, finding patterns, evaluating whether those patterns are true or not, and finally getting to conclusions. Now, AI is amazing at analyzing and finding patterns, but what it is not ready to do yet is to extract insights. It can infer information, particularly from large amounts of data. What it cannot do is get to the general from the particular, because you need experience to do that. And this is what we do with trained observation and a subsequent process of deduction. This is one of the things that AI, right now, is missing. Sure, we have emotional insight extraction that you can have when you write some text. It's something very popular with feedback, something that IBM, for example, was implementing. You can extract hints of joy, anger, sadness from a text. But you do not have, and actually right now you cannot even, because it's unethical, analysis of the face, of the emotions, the emotional spectrum. You need to extract those from the slang, from the voice pitch.
There is so much information that goes through there that is going to modulate the conversation, and thereby the coaching. So one of the things that I think AI can do is aid the coachee or the coach, especially in the training of the coach, by providing a different kind of feedback. Something that can spark new ideas, right? So for instance, what do we have? There is usually peer coaching, coaching triads and coaching supervision, the coaching supervisor. AI can help there. Then there is another thing that, this might be a bit sci-fi to hear, right? But here it is. What distinguishes us the most from AI, and most importantly from what I think will be a complete form of AI, is what I consider the X factor for people, and it is the ability to worry. Because when you worry, you introspect, and when you introspect, you revise what you know; you worry that you may not have the complete picture. So it's not worry in the form of fear, it is worry in the form of admitting ignorance. Now, take ChatGPT, for example, which right now makes things up many times, doing what we technically call hallucination. When it doesn't know, ChatGPT invents stories.
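As an illustration of the kind of text-based emotional insight extraction Andrea describes (hints of joy, anger, sadness in written feedback), here is a minimal sketch. The lexicon is invented and far too small for real use; production tone analyzers use trained models, not word lists, so treat this only as a toy demonstration of the idea.

```python
# Toy emotion-hint extractor: scan text for words from a (hypothetical,
# deliberately tiny) emotion lexicon and count hits per emotion.
from collections import Counter

EMOTION_LEXICON = {
    "joy": {"glad", "excited", "proud", "happy", "delighted"},
    "anger": {"furious", "annoyed", "frustrated", "unfair", "angry"},
    "sadness": {"tired", "hopeless", "disappointed", "sad", "lonely"},
}

def emotion_hints(text: str) -> dict:
    """Count words from each emotion category appearing in the text."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    counts = Counter()
    for word in words:
        for emotion, vocab in EMOTION_LEXICON.items():
            if word in vocab:
                counts[emotion] += 1
    return dict(counts)

feedback = "I was excited at first, but now I feel tired and frustrated."
print(emotion_hints(feedback))  # {'joy': 1, 'sadness': 1, 'anger': 1}
```

As Andrea notes, this only sees the verbal channel; the non-verbal and paraverbal layers of a coaching conversation are invisible to it.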

Garry Schleifer:

I've heard of that. I've heard of that. Like make up names of books to support an article or something like that. It was crazy.

Andrea Paviglianiti:

Yes. I was interrogating ChatGPT about movie content, and the relationships between characters in movies or books or TV series, and it was coming up with, well, let's say, parallel universes.

Garry Schleifer:

I still like hallucinations, but parallel universe is another great way of saying it.

Andrea Paviglianiti:

Yeah. You cannot right now create, well, it's debatable whether we'd call it creativity. Maybe it's a creative process and we don't know yet. But this is something we have, right? And this modulates our communication as well, and thereby comes also the idea of trusted AI or ethical AI, which is what you mentioned you want to hear more about.

Garry Schleifer:

Yeah. It's interesting you should say that, because I'm using the tool. I'm testing it for my own purposes, and I find that, depending on what you put in, like garbage in, garbage out, the more specific you are about what you're looking for, the better the result comes out. And we've had a couple of examples we used here at choice for some background stuff, supportive, and it was about 70 to 80% useful. So what it made me aware of is that it still requires, I'm going to say, an expert. You know, I'm the expert reading this, because I know what the end use will be for the particular topic that I asked ChatGPT to look into. And I can't help but believe that what you're inferring is the same thing. One of the questions I was going to ask you, and maybe we could use it in alignment with this part of the conversation, is: do you envision these technologies being used to train coaches? So if you take mentor coaching, coaching supervision, all this sort of stuff, and the fact that there are outlying hallucinations, how accurate could AI be without the support of a human to edit the final result before it's used? Is that too much of a question? You get the gist?

Andrea Paviglianiti:

No, it's actually an interesting one. So the short answer is: with the right data governance, with the right garbage-in, garbage-out best practices, you can get an AI which is, let's say, more focused and more knowledgeable on specific topics. Now, ChatGPT, as I mentioned, belongs to this family of what are called large language models, or even general, foundational models. Those are trained on a vast amount of data, and this data may be correct or not. So that implies that there is error. Now, an AI is only as good as the quality of the data it's trained with, and it shouldn't come as a surprise that human bias is propagated into the data we've collected, because we collect through observation. And we are biased ourselves; there is a nice book about that, actually, "Thinking, Fast and Slow" by Daniel Kahneman, that speaks a lot about those biases. You realize how incomplete we are, and thereby we cannot expect AI to do better yet. For example, yes, it has a very high rate of accuracy most of the time. Indeed, what you should worry about is the inaccurate part: even if it's just 1%, it can be the thing that stays with us. From this, then, comes the technical question. Now, we have seen AI evolving. I don't know if you remember, in the early 2000s there were already some chatbots out there that could answer your questions and could even, let's say, prime you in a way. MSN Messenger was the chatbot director.

Garry Schleifer:

Okay.

Andrea Paviglianiti:

And that was the early 2000s. I was just a teenager back then.

Garry Schleifer:

A baby.

Andrea Paviglianiti:

But back then, those kinds of services were more like a game.

Garry Schleifer:

Entertaining, right?

Andrea Paviglianiti:

Yeah. Afterwards we got Cortana, Siri, Alexa, right? They provide a service to you, and ChatGPT is a new advent. It's more human-like in its language. This allows us to do much more. It can program for us, it can converse with us, and we can learn through it, which is one of the reasons I like it a lot. I use it to learn. So could it be used to train coaches? I'd say it could be used to train coaches by proposing feedback. Before, I mentioned some things like emotional insights, for example. Maybe what it could help with is if you have some homework, some assignment, right? And the assignment says you need to test powerful questions, and it could come back with feedback. So it can simulate a coaching conversation. But again, it's limited to the way we elaborate the questions, rather than all the rest that I mentioned, so we have only the verbal communication. We may have some emotional insight, but it's not that good yet, in my opinion. So we need to be very careful, because this may also introduce bias and create unskilled coaches. Or even worse, you may have a bad AI which, depending on who is in front of it, is going to assess the person differently, not because of skill, but because of underlying issues, which usually are features that you may not notice. Voice pitch, namely. Women have a more elevated voice pitch. In a man's voice, an elevated pitch is maybe a symptom of nervousness. So the AI may associate elevated pitch with nervousness, and therefore it generalizes: all women are nervous when they speak. That's a serious problem, actually, because you get bad evaluations for female coaches who might actually be excellent. More on that is explained in another book I was reading, by Caroline Criado Perez, called "Invisible Women." It talks especially about gender bias, which is extremely interesting.
It could be this, or maybe the slang or semantic patterns that we use, and those can vary from subculture to subculture. So there is also a need for an ethical AI, right? Ethical, transparent, and explainable. Only then can it be trustworthy. Only then can you really, fully trust AI to do a specific task. Until then, you always need to monitor how AI operates. You need somebody who does that.
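The voice-pitch bias Andrea describes can be made concrete with a small sketch. The speakers, pitch values and threshold below are all invented for illustration: a naive assessor that equates elevated pitch with nervousness, tuned only on male voices, ends up flagging calm female speakers as nervous.

```python
# Invented example data: pitch in Hz, plus whether the speaker was
# actually nervous. Typical adult female speaking pitch sits well
# above typical male pitch, so a male-tuned threshold is biased.
speakers = [
    {"name": "coach_a", "sex": "F", "pitch_hz": 210, "nervous": False},
    {"name": "coach_b", "sex": "M", "pitch_hz": 120, "nervous": False},
    {"name": "coach_c", "sex": "M", "pitch_hz": 190, "nervous": True},
    {"name": "coach_d", "sex": "F", "pitch_hz": 230, "nervous": False},
]

NERVOUS_PITCH_THRESHOLD_HZ = 170  # hypothetical, tuned on male voices only

def naive_assessment(speaker: dict) -> bool:
    """Flag 'nervous' purely from pitch: the biased shortcut."""
    return speaker["pitch_hz"] > NERVOUS_PITCH_THRESHOLD_HZ

for s in speakers:
    flagged = naive_assessment(s)
    false_alarm = flagged and not s["nervous"]
    print(s["name"], "flagged" if flagged else "ok",
          "(false alarm)" if false_alarm else "")
```

Every false alarm in this toy data lands on a female speaker, which is exactly the unfair evaluation of excellent female coaches that Andrea warns about: the model learned a proxy feature, not the skill.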

Garry Schleifer:

Yeah. I really love that we did this issue, that we got these articles, and that we're putting this out and talking to you about this sort of thing. Because fear usually comes out of, and I won't say ignorance as in ignorant, but ignorance as in lack of knowledge. And the more I know, the less, well, I never really had fear about AI, and I definitely don't have fear that AI is going to take over coaching. Will it support coaching? Yes. But like you said, there are ethical responsibilities, there are moral responsibilities, there's bias, and I don't know which of those categories that fits into, but it still requires somebody to take a look at the finished product before it gets used, to determine whether or not it's correct. Like you say, it has hallucinations, what was the other word you used? There was another one. Anyway, it makes up stuff to fill gaps. So you need a final eye. It's like when we edit the magazine: we look at it at least three times with at least four pairs of eyes, and I don't know if we still don't have one typo in the magazine, but we're a lot closer. It requires someone like me, who is an expert in coaching, not the expert, just, I've done it for 22 years, so I know contextually what I'm looking for. But AI was a little different, right? We want to make sure that the content we're delivering, like what we're getting from AI, is truthful, and I can't even imagine what the rest will do. But going forward, and this is a key thing I noticed in your conversation, both in the article and here, you keep dropping in the word "yet." What do you mean by that? What's coming down? What does "yet" mean?

Andrea Paviglianiti:

"Yet" means that, if you look at it, the technology is evolving in such an exponential way that we may expect at one point, maybe even by the end of this decade, that AI will be much, much more improved. I mean, even since the release of ChatGPT on the so-called GPT-3.5 model, which was in November 2022, if I recall correctly. Now we are already at GPT-4, so it's supposed to do a better job. ChatGPT, of course, gave AI a push, because it caught interest worldwide. It caught the interest of business, and it caught the interest of governments, too. And because of that, now finally people are looking into the ethical problems, the ethical questions, much more seriously than has been done until now. Until now, it was mostly in the research and development area, right? So only experts would look at it, debate it, and work on it. Now you have different kinds of experts getting in. You have philosophers and social scientists, who are employed to make sure that AI works for good, and by good I mean serving people fairly and equally, and also that those answers, those decisions, can somehow be traced back to a logical process, so that they can be explained. That, for example, is what IBM's Watson OpenScale does; it's a solution that does specifically that. But you can look even broader. A friend of mine whom I met in Brussels years ago, when I was living there, is today at Hugging Face, which provides open-source models. Companies that want to experiment and pioneer studies on ethical AI go there. They want to make sure, and many other people want to make sure, that the moment AI is able to do things it cannot do right now, it will do them wisely and for the good, because AI has already created problems. Just last decade, for example, there was the unfortunate case of the LAPD and what was called predictive policing, right? It was meant to aid in predicting where crime might occur more often, based on historical data.
The problem was that the historical data was racially biased, and this came up afterwards. So the program had to be shut down, because it was always flagging certain minorities or certain racial groups, which obviously made it extremely unfair and very unethical. Although the premise was good, the way it was done was not. So this, right now, is what we need to work on and focus on. On the other hand, we also have other unfortunate cases, like countries that do not want to embrace it. I speak of Italy, which actually banned ChatGPT. Rather than governing it, rather than creating a governance, the government got scared and simply banned it. So now, if you want to use it from Italy, you cannot. I do not know if there are other countries. I mean, it's a shame, to be honest, and it's also blocking progress, in my opinion. But it is what it is. So this is what occurs when AI is not regulated.

Garry Schleifer:

Yeah. Wow. Crazy. Okay, I'm going to pivot a little bit here, because I was thinking about this before we got together on this call. I see AI being supportive, right? Beforehand, perhaps analyzing a call and giving you some feedback; post-call, giving you some feedback again, you know, vetted by ethical and moral experts on how the system will actually work. I'm trying to picture how I might be using it during a call. So if I'm coaching you, is there a way to use, or a reason to use, AI while you are coaching a client?

Andrea Paviglianiti:

Now, I am being presumptive here, right? Because there may not be a solution like that yet. But this is why I'm here, right?

Garry Schleifer:

Exactly. That's why you're doing the work you're doing.

Andrea Paviglianiti:

What I'm thinking is, provided that there is the right context for it, the AI is going to record the call, separate person one from person two, produce two different analyses, and be able to find the logical connection between questions and answers, but also the subtle context that sits between questions and answers. So, as I was saying, some emotional insight. And at the end it provides some insight, like: the coachee, at this point, was flinching because XXX. Next time you need to do better ipsum, ipsum, ipsum, for example. So it will provide you insight about how to do a better job of coaching. And of course, like any feedback, this is to be taken with reservations, but it's something that you can then discuss, for example, with peers or with coaching supervisors. The person can tell you the same thing, or something similar. It would be interesting to see it used also in psychology and psychotherapy, because it could actually get insights that, without due mindfulness and situational awareness, you might not get. We usually need to focus on 20% of the information; we are unable to catch everything. We also used to say that 20% of the information provides 80% of the result, the Pareto principle. But what if you have an aid that gets another 20%, for example? That is where AI can help. It could even provide you insight into what to do next with your coaching, and then you can deliberate.
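The post-session analysis Andrea imagines, separating a recorded call into each person's turns and then computing simple signals about the coaching, can be sketched minimally. Real tools would add speaker diarization from audio and trained emotion models; the transcript, speaker labels, and the "question ratio" metric below are all invented for illustration.

```python
# A labeled transcript: (speaker, utterance) pairs, invented for the demo.
transcript = [
    ("coach", "What would success look like for you?"),
    ("coachee", "I am not sure. Maybe finishing the project."),
    ("coach", "You should just work harder."),   # advice, not a question
    ("coach", "What is holding you back?"),
    ("coachee", "Honestly, I worry I will fail."),
]

def question_ratio(turns, speaker):
    """Share of a speaker's turns phrased as questions."""
    own = [text for who, text in turns if who == speaker]
    if not own:
        return 0.0
    return sum(text.endswith("?") for text in own) / len(own)

ratio = question_ratio(transcript, "coach")
print(f"coach question ratio: {ratio:.2f}")  # 2 of 3 turns -> 0.67
```

A mentor coach or supervisor could then discuss such signals with the coach, which matches Andrea's framing: the AI produces candidate feedback after the session, and the humans deliberate on it.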

Garry Schleifer:

Okay. But that's all pre and post, with the AI underneath you transcribing and analyzing while you talk. I think I'm answering my own question; we both are. There's nothing that an AI can do currently that will help you while you're coaching, because I think of the number one thing as being present with a client. You can't be present if you've got an AI in the background going, okay, what about asking this question next? Or how about this question? Or did you notice that? It will not aid you in the coaching of the client?

Andrea Paviglianiti:

No, absolutely not. Particularly, to me, because of that mindfulness and situational awareness that you need. You'd have a distractor at that point.

Garry Schleifer:

Exactly. Exactly.

Andrea Paviglianiti:

But it can run in the background.

Garry Schleifer:

Yeah, totally.

Andrea Paviglianiti:

A coach can try, right? You are the coach. You have a coachee and you have an observer except the observer is not human.

Garry Schleifer:

Yeah. Yeah. Good point. And again, that's supportive technology. It's not replacement technology. It's going to make us better, move us away from what a robot can do, to be more, as you said, aware of emotions and critical thinking and that sort of thing. So, oh my goodness, the time is just flying by. I still wanted to ask you, and I don't know if this is a long answer, but what do you think about ethics and AI taking over specialized tasks that we normally would do ourselves?

Andrea Paviglianiti:

Well, you mentioned fear before. The fear of the unknown, which in this context is the fear of losing your job. You learn a lot of stuff, and then suddenly there is a robot doing it more efficiently than you. But efficient does not always mean effective, and I want to say this because, once again, in a continuous pattern of repetition since every industrial revolution, with every technological advancement we have shifted to other tasks. For example, people drove horses to carry people around, or mail. Now we don't use horses; we use trucks, and we have cars as taxis. Or people kneaded bread by hand, and it took a lot of muscle to do that. Now you have a machine that can do the hard part, and you can focus on detail and quality. And the same will happen with AI. It's going to make us shift once more. With exponential technology this fear is even greater, because it's not like you have a small technological advancement and then a century with nothing, like the Middle Ages again. Here you have moved, in a decade, from the mobile phone to the smartphone. And smartphones are basically small computers.

Garry Schleifer:

No kidding.

Andrea Paviglianiti:

Google imagined that in 1998. So of course people are scared, but what I want to encourage people to do instead is to think about how this new technology is going to help them work less, or better, work smarter. Because my vision is this: if AI is able to do our tasks, then we may as well do what we like, what we love, right? And that opens another ethical question, because AI should be helpful to people. Companies are the ones who replace people; it's not the AI. So it's, again, about the way we use technology to improve society. But that's for another interview, probably.

Garry Schleifer:

Oh, that's a whole other interview, and maybe another article. You never know. You said something I want to take away from this: it may be more efficient, but not necessarily more effective. That's a really, really good point. One of the things I'm getting from this conversation, and from all the AI pieces we've written about in this issue, is that if you continue to operate as a robotic coach, delivering powerful questions that you've prepared, or you use a system, then a computer and AI can replace you, because a system means you're basically telling it: I do this, then I do this, then I do this, then I do this. So it behooves us, and I know that's a cultural word, but it just reminds us that we should continue to be the best coaches that we can be, continue to hone our craft, to use AI tools to support mentor coaching and supervision perhaps, with the transcripts and all that sort of stuff, but to the end of making ourselves irreplaceable.

Andrea Paviglianiti:

That's the point. That's the whole point. I mean, if you were a coachee and we had sessions with, let's say, a so-called AI coach, it comes with just one tone. At least right now, it cannot really range across a wide spectrum of emotions, or a simulation of emotions. How would you feel at the end of the session? Would you feel you established trust with the AI? Trust with AI is a different kind of trust; it's a relationship of trust. And there is the question of how data is collected by the AI, which also impacts the concept of confidentiality: is this data used to train the AI or not, and to what extent? Yes, there is a lot of wood on the fire there, and that is why governance and trusted AI once again come into play. Maybe you have an idea of how to implement AI in coaching, and you need somebody who does it, but you don't want that person or that company to do it for you alone, because otherwise they may miss the point. That is where co-creation comes in, for example. And as you can see, then we get back into coaching, because you need to coach the client. How do you envision your goal? What are the technologies in place, and how do you want them to be in this scenario? What do we establish at the end? What will be the success criteria? And there is, again, one more process of critical thinking, of dialogue, and most importantly, of collaboration.

Garry Schleifer:

Yeah, there's a lot to do. It's not a hundred percent usable. All those things that you spoke about are things that we need to consider, put in place, evaluate, and have experts get their eyes on. So yeah, I'm not worried. I'm heading towards my MCC, so I'm doing my job to be a great and better coach, and more impactful. Andrea, thank you so much. I'd like to know, what would you like our audience to do as a result of your article and this conversation?

Andrea Paviglianiti:

As a result of my article, I'd say to look into these concepts in more detail. Look into AI ethics, at least to grasp the concepts, because it is easy to think, AI is great, let's implement it. What is more difficult is, let's make it good, so that our clients will be happy and we will not screw it up, right? Which is what has already happened many times. And as a result of this conversation, maybe I would suggest to rewind it. I suggested a couple of readings which actually fit very well with this conversation, tackling the problems around AI, to better understand where artificial intelligence is heading, and what companies like IBM, Amazon and OpenAI with ChatGPT are already doing.

Garry Schleifer:

Lots of things to choose from. Thank you. What's the best way for people to reach you?

Andrea Paviglianiti:

The best place for people to reach me is, I guess, my website, which is www.keytothink.com, or through LinkedIn or Instagram. Those are the places where I'm usually around and reachable.

Garry Schleifer:

Awesome. There's so much more we could talk about. I could go on and on. Thank you so much for joining us for this Beyond the Page episode. That's it for this episode of Beyond the Page. For more episodes, subscribe via your favorite podcast app. And don't forget to leave us a review. Let other people know what you thought about it, and whether it's something they should dig into. We think it is. And while you're on our website, don't forget to sign up for a free digital issue of choice Magazine. If you're not already a subscriber, go to choice-online.com and click the "sign up now" button. I'm Garry Schleifer. Enjoy the journey of mastery. Thanks, Andrea. Thank you as well.