Dear College of Science Faculty and Staff,

The very recent earthquake that has had devastating impacts in Syria and Turkey is fresh in our minds. Warmest wishes to those of you with family and friends in these areas, and we pray that each of your loved ones is safe.

You've surely heard of ChatGPT, an artificial intelligence platform that can interact with you online (a 'chatbot'). ChatGPT is particularly adept at understanding questions and formulating answers, including adequately writing some homework assignments. There is both excitement and alarm over the platform, including concern that it may undermine academic honesty. In my view, this is another opportunity for us to assess why and how we educate. I tell students at all levels that their Northeastern degree is valuable because it is hard to achieve. I tell them that meeting the mind-stretching demands of our education allows them to be entrusted with a top career or valuable next educational steps. But academic training should not be a lonely road or damagingly stressful; rather, our education should (and often does) take place in comfortable communities with plenty of support, encouragement, and confidence building.

How students are mind-stretched and empowered in the face of developing technologies is a question we have addressed over many years, and it is embodied in President Aoun's important framing of Humanics. Using typewriters or computers instead of writing by hand was one hurdle. Using online searches rather than poring through hardcopy books was another. Spell-and-grammar checkers are another genuinely useful tool. And now we have something that may assist with writing original text. ChatGPT does not need to compromise the power of higher education, but we may need to restructure courses and assignments. Let me know what you think - I'm delighted to support your educational innovations in the College of Science.

All of this reminds me of the science fiction author Isaac Asimov's Three Laws of Robotics. According to Wikipedia: "The Three Laws, quoted from the 'Handbook of Robotics, 56th Edition, 2058 A.D.', are:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
The Laws are a bit clunky, but I think they hold lessons for ChatGPT. In the extensive 'Ethics of AI' discussions, including those around fake news, I've not seen a suggestion to build AI platforms around Asimov's Laws, or some extension of them that encompasses all life. No AI platform will ever have the soul of Louise Erdrich, Cormac McCarthy, or Toni Morrison. No AI platform will ever have the experiences or nuance of an actual person. Perhaps there will be a good facsimile, much as a movie can brilliantly evoke, yet is always less rich than, the real thing. Perhaps the real goal is to get ChatGPT and its lookalikes to do no harm.