That’s just my point. That reduced version of AI is Artificial all right, but it is not Intelligence any more. It is blind programming. To set breakfast on the table is fine in countries and cultures that have breakfast (in Kinshasa my Congolese students had no food before the evening meal after school), and even among those who do have breakfast there is so much variability that the task is far more complex than it looks, and I am not even speaking of all possible variations from one home to the next, from one family to the next, from one individual to the next. And what about a home or family where every individual has his own breakfast, or none? What about the house on the border between France and Belgium where the kitchen is cut in two precisely by the border line? On one side they have a French breakfast in the morning, and on the other side a Belgian one.
If AI didactic robots are programmed to perform a particular task, they are not intelligent, because intelligence is the ability to change the tasks, or the programs, according to the desires of the students, according to their reactions, according to the most privately kept emotions of the students confronted with the agenda proposed to them. And what happens, as it did in one class last year, if a class of adults in continuing education gets blocked and hijacked because one student has stinking shoes and feet? Can your AI didactic robot cope with that situation?
And don’t tell me we cannot consider all situations. We just have to be able to consider any situation that may present itself in a class of ours. That’s why AI robots will NEVER replace teachers, even for simple drills: what does the AI machine do if a student refuses to participate, or if a student gets flustered and maladjusted in front of that dehumanized drill? Of course, we can decide that such students, who are a minority, are just excluding themselves and have to be kept out of the class (and expel the student with the stinky shoes and feet). That’s segregation, but we can’t make anyone happy against their will, can we?