The True Source of Intelligence
“To know wisdom and instruction, to understand words of insight, to receive instructions in wise dealing, in righteousness, justice and equity; to give prudence to the simple, knowledge and discretion to the youth”
Proverbs 1:2-4, ESV
Introduction
A curious phenomenon occurred recently while I was shopping at a Walmart big-box store. Those locations have quite a reputation for odd dress and behavior, so nothing I see there surprises me! On this occasion, however, I stopped to do a double take as a robot rolled past my aisle and turned down another one. It was checking shelves to see which items needed to be restocked. I greeted it but received no R2D2-like computer chirping reply!
I recall wondering how many humans that robot had replaced, but beyond Elon Musk’s electric cars, the artificial intelligence (AI) technology that enabled the robot to function had not prompted me to research the moral significance of the marvel. Until now!
K-12 schools have begun their fall terms, and a recent article raised the issue of the ethics of AI in these classrooms. I also recently read an article on the ways that AI may jeopardize the privacy of children’s health. My school-age grandchildren came to mind and my interest was immediately captured! We will begin by understanding the terminology.
What is Artificial Intelligence?
The term artificial intelligence was coined at a 1956 summer workshop hosted by the mathematics department of Dartmouth College and organized by John McCarthy, who said, “AI is the science and engineering of making intelligent machines.” The term is now used both for the intelligent machines that are the goal and for the science and technology that are aiming at that goal. [1]
Artificial Intelligence means “the ability of computer systems to perform tasks normally associated with humans.” [2] At its most basic level, AI enables computers to perform duties like operating a digital assistant and vacuuming a floor. Such machine learning does not require human input, but there is obviously much more to the technology. [3]
Where may it be found?
Artificial Intelligence is ubiquitous and has many uses. It contributes to everything from a mortgage request to a movie selection. It governs our fitness devices and runs our appliances. Your child’s college application and the medical diagnostics of their health conditions, however, indicate the significant reach of this technology into previously private areas of family and personal life. This capability holds deep moral implications as we will read later in this article.
AI may be trained to make predictions and be programmed to make decisions, some of which are without human involvement. The process works through “data analysis and pattern identification.” For example, major sports leagues increasingly make use of data analysis to discover patterns that their opponents display. Team leaders will make in-game decisions based upon this process.
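The sports example above can be sketched in a few lines of code. This is a toy illustration with invented data (the situations and plays are hypothetical, not drawn from any real league): “pattern identification” here is simply tallying how often an opponent chose each play in a given situation, and “prediction” is returning the most frequent choice.

```python
# Toy sketch of "data analysis and pattern identification" (invented data):
# tally an opponent's past plays per situation, then predict the most
# frequent one. Real systems are far more sophisticated, but the principle
# -- learn patterns from data, then predict -- is the same.
from collections import Counter, defaultdict

# Hypothetical game log: (situation, play the opponent chose)
game_log = [
    ("3rd-and-short", "run"),
    ("3rd-and-short", "run"),
    ("3rd-and-short", "pass"),
    ("3rd-and-long", "pass"),
    ("3rd-and-long", "pass"),
]

# "Training": count how often each play occurred in each situation
patterns = defaultdict(Counter)
for situation, play in game_log:
    patterns[situation][play] += 1

def predict(situation):
    """Predict the opponent's most likely play in this situation."""
    return patterns[situation].most_common(1)[0][0]

print(predict("3rd-and-short"))  # most frequent play seen: "run"
```

A team leader acting on such a prediction is trusting that past patterns will repeat, which is exactly the assumption every data-driven AI system makes.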
AI is watching you, too! Even though we are not professional athletes, data is mined from our online activities or by using sensors to observe the environment (e.g., cameras, thermometers, microphones, and motion sensors). AI, as professor Jeph Holloway states, holds both “promise and peril.”
The Promises and Perils in AI
The promises. Much good is derived from the use of AI. Often-mundane tasks are made much easier because of this technology. Room cleaning, for example, plus home security systems, activity trackers for health and wellness, smartphones, and smart speakers like Google Home and Amazon Echo have made our lives much easier.
AI also makes accessible personal learning platforms for students, automated assessment systems to assist teachers, and facial recognition systems to produce insights into learners’ behaviors.
The perils. There also are burdens inherent in AI. Complex computer “algorithms” (i.e., sets of instructions) are used to analyze data and often lead to surprising, unexplained, and even deeply disturbing results (e.g., African Americans, many of whom pay cash for medical services, may not be included in healthcare algorithms that depend upon insurance data). This type of AI blind spot contributes to potential harm when people are involved.
AI systems may be hurtful when bias occurs, because of assumptions made during the development process, prejudices in the training data, or design errors. Negative bias impacts vulnerable populations, including children.
AI may also pose safety risks when these systems are poorly designed or regulated, misused, or hacked. For example, loss of control over autonomous systems, like driver-less cars, has led to injury and even death. Children’s physical safety and private educational and health data may also be placed at risk.
AI-driven recommendations are often based on profiling. The content feeds people information based upon their preferences, thus creating “filter bubbles.” It can be improperly used to spread disinformation and bias, which may endanger “children’s ability to develop and to express themselves freely.” [4] Vulnerable children may be directly impacted through their digital activities or through decisions AI makes about them or their parents.
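The “filter bubble” dynamic described above can be sketched with a deliberately naive recommender. Everything here is invented for illustration (the catalog, the item names, the click simulation): once the user clicks one topic, this recommender only ever serves that topic again, and the feed narrows.

```python
# Hedged sketch (invented catalog and clicks): a naive recommender that
# serves only the category a user has clicked most. After the first click,
# the feed collapses to a single topic -- a "filter bubble."
from collections import Counter

catalog = {"sports": ["s1", "s2"], "news": ["n1", "n2"], "science": ["c1", "c2"]}

def recommend(click_history):
    """Recommend items only from the user's most-clicked category."""
    if not click_history:
        # No profile yet: show everything
        return [item for items in catalog.values() for item in items]
    top_category = Counter(click_history).most_common(1)[0][0]
    return catalog[top_category]

clicks = []
for _ in range(3):
    feed = recommend(clicks)
    # Simulated user: clicks sports whenever sports items appear
    clicks.append("sports" if any(i.startswith("s") for i in feed) else "news")

# After the first sports click, every later feed is sports-only.
print(recommend(clicks))  # -> ['s1', 's2']
```

Real recommendation systems are more subtle, but the feedback loop is the same: preferences shape the feed, and the narrowed feed then shapes the preferences, which is precisely what makes such systems risky for developing children.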
Ethical concerns for the core values of health and learning
There are overall concerns related to the builders of algorithms. Basically, algorithms reflect the values of those who build them. These creators hold positions of power and their worldviews are highly influential. On what set of values will human constructors and software programmers base their work? An ethical concern is that those who fashion these sets of instructions create a set of data that “represent society’s historical and systemic biases.” This reality ultimately transforms into “algorithmic bias.” Gender and racial biases may be inherent in different AI-based platforms. Health and education are two critical areas of importance for children.
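How biased historical data “transforms into algorithmic bias” can be shown with a minimal sketch. The data below is entirely invented: applicants in two hypothetical groups have identical qualification scores, but the historical approvals favored group A, and a model that simply learns the pattern in the data reproduces the bias.

```python
# Minimal sketch (invented data): a model "trained" on historically biased
# decisions reproduces the bias. Both groups have the same qualification
# score (7), but historical approvals favored group A.
historical = [
    # (group, score, approved)
    ("A", 7, True), ("A", 7, True), ("A", 7, True),
    ("B", 7, False), ("B", 7, False), ("B", 7, True),
]

def train(rows):
    """'Learn' an approval rate per group -- the pattern in the data."""
    rates = {}
    for group in {g for g, _, _ in rows}:
        outcomes = [ok for g, _, ok in rows if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(historical)

def predict(group, threshold=0.5):
    """Approve when the group's historical approval rate clears the threshold."""
    return model[group] >= threshold

print(predict("A"), predict("B"))  # identical scores, different outcomes
```

Nothing in the code is malicious; the injustice enters through the data, which is why the values and blind spots of those who assemble the data and build the instructions matter so much.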
Health concerns. Children represent a vulnerable population in any setting, so protecting their health rights is of fundamental importance. Each child has a right to equitable access and quality health care. An abundance of AI data supports health care, but the quality of it raises some concerns. Under-representation because of gender, race, age, and sexual orientation bias is a concern. These types of prejudices emerge during modeling and subsequently “diffuse” through the resulting algorithm.
Secondly, there are concerns with the protection of privacy (see also below). For example, children’s rights are a concern, especially when a health diagnosis may lead to future discrimination “based on the data accumulated about a child, the child’s ability to protect his or her privacy, and their autonomy to make choices about their healthcare.” [5]
Educational concerns. Similar moral concerns, evident in health care, also appear in education. The areas that follow are widely recognized as moral issues that surface at the intersection of AI and education. [6]
A primary concern is privacy. Privacy violations occur when people expose an excessive amount of personal information in online platforms. Parents and their children very likely give consent without knowing or considering the extent of the information (metadata) they are sharing (e.g., language spoken, racial identity, biographical data, and location).
Surveillance or tracking systems gather detailed information about the preferences of students and teachers. AI tracking systems monitor activities and determine or predict future preferences and actions of their users. For example, students may feel insecure and unsafe if they know that AI systems are being used to surveil and police their thoughts and actions.
AI may hinder a child’s or teacher’s ability to act on his or her own interests and values (autonomy). The use of predictive systems based upon algorithms, for example, raises questions about “fairness and self-freedom.” There is a strong likelihood of perpetuating existing bias and prejudice regarding social discrimination and stratification.
Lastly, bias and discrimination are critical moral concerns when considering the ethics of AI in K-12 education. Unfairness is oftentimes embedded into machine-learning models. Racial bias has been associated, for example, with AI’s facial recognition systems. Facial recognition software has “improperly misidentified a number of African American and Latin American people as convicted felons.”
One UK grade standardization algorithm in use during the pandemic led to a score distribution that favored students who attended private or independent schools. Students from underrepresented groups were the most adversely affected. The potential for disrupting final grades and derailing future careers became apparent. All is not lost, however. There are key ways that Christians may be moral influencers when AI goes wrong.
Healthy Christian Stewardship of AI
A Christian approach to AI will necessarily be tempered by several biblical values. First, humankind has been fashioned in God’s image for an eternal purpose. As such, humankind has inherent dignity regardless of gender, race, socioeconomic level, intelligence, and educational attainment. Christians will work to ensure that respect for the dignity of all human life is built into AI systems. This effort will require us, as Barth wrote, to be imitators of God’s action.
Secondly, the injustices presently built into AI systems raise the questions of what worldviews are embedded in them and on what values these systems will be based. The question of worldview is made more complex by the fact that humans, Christians included, want to have as much choice in matters as possible, yet we too easily abdicate our choice and delegate it to machines. Part of this relinquishment is often due to the lack of a well-formulated and applied Christian worldview.
Thirdly, many Christians do not have a robust decision-making skill set with which to work. They, as a result, are unwittingly accepting of questionable moral systems and submit to a way of seeing the world that contradicts Christian core beliefs (cf. the value of all human life). In the case of children, they are already especially vulnerable to the worldviews of adults, and now even more so to AI creators.
Conclusion
“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower” (Alan Kay). I am uncertain whether the writer is a Christian, but Christ-followers certainly appreciate the sentiment. There is something both awe-inspiring and humbling within the creation around us: the universe that our God fashioned and governs by His word.
Indeed, the psalmist writes, “When I look at your heavens, the work of your fingers, the moon and the stars, which you have set in place, what is man that you are mindful of him, and the son of man that you care for him?” (Psalm 8:3-4). We are not unthinking “blobs of mud” as one writer likens humanity. There is no reason for us to blindly depend upon machines, lulled to moral somnolence by their wizardry. God has equipped us with a moral capacity to engage the world under His guidance.
Humankind has been imbued with God-given dignity and an amazing intellect that God expects to be used in stewarding the earth. Such intelligence is neither artificial nor meaningless. Instead, the use of our rational minds to create gives us the opportunity to glorify God in all we do.
Larry C. Ashlock
1. Lennox, John C. 2084 (pp. 16-17). Zondervan. Kindle Edition. Cf. also, Artificial intelligence (AI) is “a field of study that combines the applications of machine learning, algorithm productions, and natural language processing” (Akgun and Greenhow, “Artificial Intelligence in education: Addressing ethical challenges in K-12 settings,” Springer Nature, 9 July 2021). Furthermore, UNICEF defines Artificial intelligence (AI) technology as “. . .computers or machines that are programmed to perform tasks that we traditionally think only humans can do – by mimicking human thought or behaviour. [sic] This technology is used to make predictions (e.g. how a virus may spread), recommendations (e.g. what online videos to watch next), or decisions (e.g. how an essay should be graded).” (“AI and Children: AI guide for parents,” UNICEF, November 2021).
2. Lennox, 2084.
3. Jeph Holloway, unpublished lecture, 06-22-2021.
4. AI and Children: AI guide for parents. UNICEF, November 2021.
5. Akgun and Greenhow, “Artificial Intelligence in education: Addressing ethical challenges in K-12 settings,” Springer Nature, 9 July 2021.
6. Ibid.