Part 1:

 

While reading "Humans in the Loop: The Design of Interactive AI Systems," I found one sentence toward the end really interesting: “Interfaces should extend us. Build tools — things we can learn to use — instead of oracles, which give us the right answers but withhold an explanation.” The article's overall point is that AI systems can serve as powerful tools when they incorporate human input, but this sentence made me wonder whether AI could go further by actually providing explanations and aiding humans in the learning process. For me, it is often frustrating when a “Big Red Button” system spits out an output but makes it difficult to understand how it got there.

 

One applicable example of this problem is the one I always talk about: chess. Growing up, I meticulously reviewed my games after tournaments using a chess engine to see what I did well and where I went wrong. However, the engine alone was never sufficient. It could not explain why certain moves were good or bad, so I always needed to review my games with a coach to better understand how I could improve.

 

I recently came across a website, decodechess.com, that claims to solve this exact problem by generating explanations that are “similar to those of a human chess master.” While I have not tried it myself, I think a technology like this has the potential to revolutionize the way humans learn. Not only would chess players have easier access to explanations of how to improve, but they might also be able to change the way they train entirely. Historically, the best way to learn chess has been to read chess books, which are typically filled with past games played by famous grandmasters and commentary on why certain moves were played. AlphaGo was likewise initially trained on human games, but its even stronger successor, AlphaGo Zero, was trained purely by playing against itself. Humans may now have the opportunity to learn in a similarly engaging way: rather than tirelessly reading about other people's games, players could simply play game after game against an AI and get direct advice on what to do differently.

 

More generally, even though we may think of a tool as just a means to get the output we want, a tool that provides a learning opportunity along the way can be really valuable. What we learn may help us tackle a similar problem on our own the next time, or help us provide more accurate inputs to the AI in the future.

 

Part 2: